
Information Sciences 481 (2019) 311–329


A Multi-Adaptive Particle Swarm Optimization for the Vehicle Routing Problem with Time Windows

Yannis Marinakis a,∗, Magdalene Marinaki a, Athanasios Migdalas b,c

a Technical University of Crete, School of Production Engineering and Management, University Campus, Chania 73100, Greece
b Aristotle University of Thessalonike, Department of Civil Engineering, Thessalonike 54124, Greece
c Luleå Technical University, Industrial Logistics, Luleå 97187, Sweden

ARTICLE INFO

Article history:
Received 30 December 2017
Revised 30 December 2018
Accepted 31 December 2018
Available online 2 January 2019

Keywords:
Particle swarm optimization
Vehicle routing problem with time windows
Combinatorial neighborhood topology
Greedy randomized adaptive search procedure
Adaptive strategy

ABSTRACT

In this paper, a new variant of the Particle Swarm Optimization (PSO) algorithm is proposed for the solution of the Vehicle Routing Problem with Time Windows (VRPTW). Three different adaptive strategies are used in the proposed Multi-Adaptive Particle Swarm Optimization (MAPSO) algorithm. The first adaptive strategy concerns the use of a Greedy Randomized Adaptive Search Procedure (GRASP) that is applied when the initial solutions are produced and when a new solution is created during the iterations of the algorithm. The second adaptive strategy concerns the adaptiveness in the movement of the particles from one solution to another, where a new adaptive strategy, the Adaptive Combinatorial Neighborhood Topology, is used. Finally, there is adaptiveness in all parameters of the Particle Swarm Optimization algorithm. The algorithm starts with random values of the parameters and, based on some conditions, all parameters are adapted during the iterations. The algorithm was tested on the two classic sets of benchmark instances, the one that includes 56 instances with 100 nodes and the other that includes 300 instances with the number of nodes varying between 200 and 1000. The algorithm was compared with other versions of PSO and with the best performing algorithms from the literature.

© 2019 Elsevier Inc. All rights reserved.

1. Introduction

Vehicle Routing Problem (VRP) is one of the most important problems in Combinatorial Optimization, in Operational Research and, especially, in the field of Supply Chain Management. A number of variants of the VRP have been proposed, most of them arising from real life problems where the authors try to find the most suitable algorithm for the solution of their problem. The interested reader can find a large number of papers [23,24,40,50] and books [16,46] that cover the problem that he/she would like to solve. One of the most important variants of the VRP is the one in which time windows are used. This is a very important variant, as the addition of time windows to the Capacitated Vehicle Routing Problem leads to more realistic problems. Also, other variants of the VRP beyond the Capacitated Vehicle Routing Problem incorporate time window constraints.


∗ Corresponding author.
E-mail addresses: marinakis@ergasya.tuc.gr (Y. Marinakis), magda@dssl.tuc.gr (M. Marinaki), athmig@ltu.se (A. Migdalas).

https://doi.org/10.1016/j.ins.2018.12.086
0020-0255/© 2019 Elsevier Inc. All rights reserved.

In the Vehicle Routing Problem with Time Windows (VRPTW) the customers and, possibly, the vehicles are associated with time windows. Thus, the customers must be serviced within some time window and the vehicles may be allowed to operate within some time window. In this paper, as in most publications concerning the Vehicle Routing Problem with Time Windows [16,45,46], the objective of the problem concerns the design of least cost routes for a fleet of identical capacitated vehicles that service geographically scattered customers within pre-specified time windows. Each customer must be serviced once by a vehicle within a specified time window, and the total demands of the customers serviced by the vehicle must not exceed the capacity of the vehicle. If the vehicle arrives at a customer earlier than the lower bound of the customer's time window, it must wait until the service is possible. The depot, also, has a time window, and all the vehicles must return to the depot by its closing time. In this paper, a hierarchical objective function is used where, initially, the number of vehicles is minimized and, then, for this number of vehicles the total traveled distance is minimized.
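The hierarchical objective described above amounts to a lexicographic comparison of (number of vehicles, total distance). The following is a minimal illustration, not the paper's implementation: the solution encoding (a list of routes with the depot denoted by node 0) and the names `objective` and `better` are assumptions made for the sketch.

```python
from typing import List, Tuple

Route = List[int]  # sequence of customer indices; depot (node 0) implicit at both ends

def objective(routes: List[Route], dist) -> Tuple[int, float]:
    """Hierarchical objective: primary criterion = number of vehicles (routes),
    secondary criterion = total traveled distance. dist(a, b) is a distance
    callback; node 0 denotes the depot."""
    total = 0.0
    for r in routes:
        path = [0] + r + [0]
        total += sum(dist(path[i], path[i + 1]) for i in range(len(path) - 1))
    return (len(routes), total)  # tuples compare lexicographically in Python

def better(a: List[Route], b: List[Route], dist) -> bool:
    """True if solution a is better than solution b under the hierarchical objective."""
    return objective(a, dist) < objective(b, dist)
```

Because Python tuples compare lexicographically, a solution with fewer vehicles always wins regardless of distance, exactly matching the hierarchy stated in the text.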
In this paper, a Multi-Adaptive Particle Swarm Optimization (MAPSO) algorithm is proposed in which three different adaptive strategies are used. The Particle Swarm Optimization algorithm has been proved to be a very efficient algorithm for the solution of routing type problems. Our research group has efficiently applied variants of Particle Swarm Optimization for the solution of the Capacitated Vehicle Routing Problem [28,30,33], the Vehicle Routing Problem with Stochastic Demands [31,32], the Probabilistic Traveling Salesman Problem [29] and the Location Routing Problem [27]. Besides our group, a number of very interesting algorithms based on the PSO algorithm for the solution of VRP variants have been published by other researchers [1,9,17]. For a complete review of the application of PSO to Vehicle Routing Problems, please see [35]. The differences of the proposed algorithm from the other algorithms proposed by our group are the three different adaptation strategies described in Section 3.1. In order to see whether these adaptation strategies are efficient, the proposed algorithm is compared with the other versions of PSO that we have proposed, which do not use the adaptive strategy for the parameters. The comparisons are performed on the two classic sets of benchmark instances that are used for testing every algorithm that solves the VRPTW. The first was proposed by Solomon [45] and includes 56 instances divided into 6 sets with 100 nodes each, and the second was proposed by Gehring and Homberger [12] and includes 300 instances of different sizes divided into 5 sets with 200, 400, 600, 800 and 1000 nodes, where each set contains 60 instances and is divided into 6 subsets of 10 instances each. The proposed algorithm is, also, compared with some of the best performing algorithms (twenty of them) from the literature used for the solution of the Vehicle Routing Problem with Time Windows on the same sets of instances.
The motivation that led us to extend our previous research and to propose a PSO algorithm with these adaptive strategies is that there is a large number of routing problems and an even larger number of algorithms that solve them, and all of these algorithms have different parameters (calculated by their authors in effective ways in order to find suitable values), but there is no single set of parameters that could be applied to every routing problem. We do not claim to have found such a set, as this is almost impossible; instead, we propose a way to give the algorithm the flexibility to find a suitable set of parameters for each instance without the interference of the user. Thus, having proposed in the past a number of very efficient PSO-based algorithms for routing type problems without an adaptive procedure for finding the best set of parameters, in this research we extend our work and propose a new version of the PSO algorithm for routing type problems that uses a number of adaptive strategies to find the most effective set of parameters and give very good results. As will be presented in the next sections of this paper, these two goals were achieved. We found, in an easy and very efficient way, a set of parameters, different for each set but with no large differences between them, and we achieved computational results that are better (in most of the sets), equal (in other sets) or at least competitive (for a small number of sets) compared either to the other PSO variants proposed by our research group or, especially, to the most effective methods from the literature used for solving the Vehicle Routing Problem with Time Windows. Finally, the proposed framework could be applied, with the necessary modifications, to other swarm intelligence algorithms.
The rest of the paper is organized as follows: In Section 2, a short literature review concerning both the adaptive strategies and the VRPTW is given. In Section 3, an analytical presentation of the proposed algorithm is given, while in Section 4 the computational results of the algorithm are presented and analyzed, with detailed comparisons with other algorithms from the literature. Finally, in Section 5, conclusions and future research are given.

2. Literature review

A number of adaptive strategies have been proposed for the calculation of the parameters of the PSO algorithm. Most of them are denoted as Adaptive Particle Swarm Optimization (APSO) algorithms [49]. Thus, there are papers devoted to finding an adaptive way to increase or decrease the inertia factor through the iterations, to finding the acceleration coefficients, or both. Also, fuzzy systems have been used for the adaptiveness of the inertia weight. There is a number of publications that use an adaptive strategy to change the size of the population using different kinds of adaptiveness. For an analytical review of the kinds of adaptiveness in PSO, please see the review section of paper [34].
A number of algorithms have been proposed for the solution of the Vehicle Routing Problem with Time Windows using the same formulation as the one used in this paper [2–8,12,13,17,19–22,25,26,36–39,41,42,44,47,48].

3. Multi-Adaptive Particle Swarm Optimization algorithm

3.1. Analysis of the three novel adaptive strategies of MAPSO

In this section, the proposed Multi-Adaptive Particle Swarm Optimization (MAPSO) algorithm is analyzed in detail. Before the detailed analysis of the proposed algorithm, we analyze the three adaptive strategies that are incorporated in it.
Initially, in the initialization phase of the algorithm and whenever a new solution is created during the iterations, an adaptive strategy based on the Greedy Randomized Adaptive Search Procedure (GRASP) [11] is used for the construction of the solutions. With this strategy a number of new solutions are developed, each one different from the others, and it is ensured that the particles (solutions of the problem) are diversified so that they begin from different regions of the solution space.
The second adaptive strategy concerns the movement of the particles from one solution to another. This is a very important part of the algorithm as, in the past, the main problem in the application of a Particle Swarm Optimization algorithm to a routing problem, and in general to a combinatorial optimization problem, was the transformation of the solutions from continuous values (which are necessary for the proper operation of the PSO algorithm) to discrete values (which are necessary in routing problems, as in most routing problems a path or multiple-path representation is used) and vice versa, with the subsequent loss of information. One way to avoid this transformation was proposed by our team in [30] and was denoted as Combinatorial Neighborhood Topology, where a Path Relinking procedure was used for the movement of the particles from one solution to another. The addition of the Path Relinking procedure to the PSO significantly improved the performance of the PSO algorithm on this kind of problem and produced algorithms that, in some problems such as the Vehicle Routing Problem with Stochastic Demands [31], performed better than classic metaheuristic and evolutionary algorithms. The use of Path Relinking [15] gave algorithms that needed a step by step procedure to move from one solution to another. However, for the solution of the VRPTW, the use of this procedure may lead to a number of infeasible solutions due to the existence of the time windows, as this step by step procedure may move customers (nodes) that are initially feasibly assigned to routes into routes that eventually become infeasible. This is not an unsolvable problem, as the feasibility of the routes can be restored by using a suitable strategy; however, this may increase the computational time and, in some solutions, the number of routes. Thus, in this paper, a new version of the Combinatorial Neighborhood Topology (CNT), the Adaptive Combinatorial Neighborhood Topology (ACNT), is proposed, in which the conditions of the CNT are maintained but, instead of the Path Relinking strategy, an Adaptive Memory [43] procedure is used. In this procedure, the main idea is that the movement from one solution to another is realized using whole tours. Thus, two kinds of adaptive memories are created. In the first one, a number records how many times a tour with a specific set of nodes (without taking into account the sequence of the nodes) participates in the global best particle. This number increases (or decreases, based on some conditions) the probability of a tour becoming a member of the new solution. In the second adaptive memory, the same procedure is followed for the personal best solution of each one of the particles. This distinction is made because in the CNT there is a possibility for the new particle to follow either the personal best solution or the global best solution (or a new direction, as will be described analytically later). The addition of a number that records the consecutive iterations in which a route has been present in the global or in the personal best solutions is inspired by the medium and long term memories of Tabu Search [14].
Finally, in the third adaptive strategy all the parameters (acceleration coefficients, iterations, local search iterations, upper and lower bounds of the velocities and of the positions, and number of particles in each swarm) are adapted during the procedure and, thus, the algorithm works independently and without any interference from the user. All parameters are randomly initialized and, afterwards, during the iterations, the parameters are adapted using three different conditions: the first one is used for all parameters except the number of particles, the second is used for increasing the number of particles and the third one is used for decreasing the number of particles.
In the proposed algorithm, all parameters, including the number of particles, the iterations and the local search iterations, are adapted during the iterations. The parameters are adapted based on an equation in which the parameters that led to the best so far solution are kept as the best so far parameters, and the new parameters are calculated as the sum of the best so far parameters and quantities that depend each time on the best so far parameters and the current parameters. These are the main differences of the proposed algorithm from the above mentioned algorithms from the literature. In the following subsections, initially, the Constriction Particle Swarm Optimization algorithm, on which the proposed algorithm is based, is presented and, then, all the new features of the algorithm are presented and analyzed in detail.
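The excerpt describes the parameter update only qualitatively (the best-so-far parameters plus quantities that depend on the best-so-far and current parameters). Purely as an illustration, and not the paper's actual equation, one form consistent with that description is sketched below; the random step size and the clamping to a fixed range are our assumptions.

```python
import random

def adapt_parameter(best_param: float, current_param: float,
                    lower: float, upper: float) -> float:
    """Illustrative adaptation step (assumed form, not the paper's exact
    equation): the new value is the best-so-far value plus a random quantity
    proportional to the gap between the best-so-far and current values,
    clamped to the allowed range [lower, upper]."""
    step = random.uniform(-1.0, 1.0) * (best_param - current_param)
    return min(upper, max(lower, best_param + step))
```

The clamping step mirrors the requirement stated in Section 3.6 that the randomly chosen parameter values must stay within valid bounds.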

3.2. Constriction Particle Swarm Optimization algorithm

In a PSO algorithm, initially, a set of particles is created randomly, where each particle corresponds to a possible solution, has a position in the space of solutions and moves with a given velocity. In the VRPTW, a particle is recorded via the path representation of the tour, that is, via the specific sequence of the nodes. It should be noted that when a classic PSO algorithm is applied to a routing problem, it needs a sequence of transformations of the solution from continuous to discrete values and vice versa in order to calculate the velocity equation of each particle (see Eq. (1) below). This is a drawback of the PSO algorithm. The position of each particle is represented by the vector xi = (xi1, xi2, ..., xid), i = 1, 2, ..., N (N is the population size and d is the dimension of the vector) and its performance is evaluated by the predefined fitness function. The velocity vij represents the changes that will be made to move the particle from one position to another. In a PSO algorithm, the particle can follow its own path, or it can move towards the best position it has had during the iterations (pbestij), or it can move towards the best particle's position (gbestj). In constriction PSO [10], the equations used for the velocities and positions are the following:
and positions are the following:
vij(t + 1) = χ(vij(t) + c1 rand1 (pbestij − xij(t)) + c2 rand2 (gbestj − xij(t)))   (1)

xij(t + 1) = xij(t) + vij(t + 1)   (2)

where

χ = 2 / |2 − c − √(c^2 − 4c)|, with c = c1 + c2, c > 4   (3)

t is the iteration counter, c1 and c2 are the acceleration coefficients, and rand1 and rand2 are two random variables in the interval (0, 1).
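Eqs. (1)–(3) can be written directly in code. The sketch below assumes list-based positions and velocities and shows only the generic constriction update; in MAPSO itself the position equation is replaced by the ACNT movement, as described in Section 3.3.

```python
import math
import random

def constriction_factor(c1: float, c2: float) -> float:
    """Eq. (3): chi = 2 / |2 - c - sqrt(c^2 - 4c)| with c = c1 + c2 > 4."""
    c = c1 + c2
    assert c > 4, "constriction PSO requires c1 + c2 > 4"
    return 2.0 / abs(2.0 - c - math.sqrt(c * c - 4.0 * c))

def update_particle(x, v, pbest, gbest, c1=2.05, c2=2.05):
    """Eqs. (1)-(2): velocity and position update for one particle.
    x, v, pbest are this particle's position, velocity and personal best;
    gbest is the swarm's global best position. rand1 and rand2 are drawn
    independently per dimension."""
    chi = constriction_factor(c1, c2)
    new_v = [chi * (v[j]
                    + c1 * random.random() * (pbest[j] - x[j])
                    + c2 * random.random() * (gbest[j] - x[j]))
             for j in range(len(x))]
    new_x = [x[j] + new_v[j] for j in range(len(x))]
    return new_x, new_v
```

With the common choice c1 = c2 = 2.05, the constriction factor evaluates to approximately 0.7298.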

3.3. Adaptive Combinatorial Neighborhood Topology

In this paper, a new version of the Combinatorial Neighborhood Topology (CNT), the Adaptive Combinatorial Neighbor-
hood Topology (ACNT), is proposed where the conditions of the CNT are maintained but instead of using the Path Relinking
strategy, an Adaptive Memory [43] procedure is proposed. In the ACNT, the positions’ Eq. (2) is not used at all. However, the
role of the velocities’ equation is very important as it will be explained in the following. As it was previously mentioned,
the main difference of the ACNT from the initially proposed CNT is the replacement of the Path Relinking Strategy from an
Adaptive Memory strategy [43]. In the ACNT, two kinds of memories are used, the one is for the global best particle (global
best solution in general as the best particle may change during the iterations and is denoted as global best memory (GBM))
and the other is for every particle separately (it is denoted as personal best memory (PBM)). In these memories, complete
routes with specific nodes are recorded. Two routes with the same set of nodes but with different sequence of the nodes are
recorded as one solution in the memories and the sequence that is kept is the one with the better cost that does not violate
the constraints. In each iteration, there are two possibilities for the routes in the GBM and in the PBM. The one possibility
is that a route that already belongs to the GBM (or to the PBM) appears to the global best particle (or to the personal best
particle) and, thus, the number that shows the times (iterations) where a route (GBMvall and PBMvall where l is the number
of different routes in the GBM and in the PBM, respectively) is in the GBM (or in the PBM) is increased by one. The other
possibility is a new route to appear to the global best particle (or to the personal best particle) and, thus, it has to be added
in the GBM (or in the PBM) with the counter of the routes in the GBM (or in the PBM), l, increased by one and the GBMvall
(or the PBMvall depending on the case) to take an initial value equal to one. It is preferable the best particle to be different
in most iterations as it may be possible different routes to be added in the global best memory.
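A minimal sketch of one such memory (GBM or PBM) follows, under the assumption that routes are keyed by their node set and that an occurrence counter and the best-cost sequence are kept per entry; the class and method names are ours, not the paper's.

```python
from typing import Dict, FrozenSet, List, Tuple

class AdaptiveMemory:
    """Sketch of one adaptive memory (GBM or PBM): routes are keyed by their
    set of nodes; for each key we keep the occurrence counter (GBMval/PBMval
    in the text) and the best-cost feasible sequence seen so far."""

    def __init__(self):
        # node set -> (occurrence count, best sequence, best cost)
        self.entries: Dict[FrozenSet[int], Tuple[int, List[int], float]] = {}

    def record(self, route: List[int], cost: float) -> None:
        key = frozenset(route)
        if key in self.entries:
            count, best_seq, best_cost = self.entries[key]
            # Same node set seen again: bump the counter, keep the cheaper sequence.
            if cost < best_cost:
                best_seq, best_cost = list(route), cost
            self.entries[key] = (count + 1, best_seq, best_cost)
        else:
            # New route: counter starts at one, as described in the text.
            self.entries[key] = (1, list(route), cost)
```

Keying on `frozenset(route)` directly implements the rule that two routes with the same nodes but different sequences count as one entry.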
In general, a particle can either follow its own path, or go towards its previous best solution, or go towards the global best solution (the best particle in the swarm). This was achieved in the CNT with a Path Relinking strategy. In the ACNT, a number of routes from the two kinds of adaptive memories, the GBM or the PBM, are selected. The routes are selected from the GBM if the particle goes towards the global best solution and from the PBM if the particle goes towards the personal best solution. The next step of this procedure is to decide whether the particle follows its previous best or the global best of the whole swarm. In this phase of the algorithm, we keep the same procedure as in the CNT, with the appropriate adaptations [30,31]. Initially, the average value of the velocity equation of each particle is calculated:

averagev = (Σj=1..d vij(t + 1)) / d   (4)
Also, the values L1 and L2 are calculated from the following equations:

L1 = (ubound − lbound) × (w1 − (w1 / itermax) × t) + lbound   (5)

and

L2 = (ubound − lbound) × (w2 − (w2 / (2 × itermax)) × t) + lbound   (6)
where t is the current iteration, itermax is the maximum number of iterations, and ubound and lbound are the upper and lower bounds for the velocities of each particle. The parameters w1 and w2 control the range of the values L1 and L2. If in some iterations the velocity of the particle violates these bounds, then the velocity is initialized with a new value inside the bounds. The parameters w1 and w2 should have large values, as it is desired that the value of L1 be as large as possible in the beginning of the algorithm and be reduced during the iterations. Also, the value of L2 should be larger than the value of L1 and, thus, the value of w2 should be larger than the value of w1.
There are three possibilities, depending on the averagev value:

1. averagev < L1 : there is not any selection from either of the two memories, meaning that the particle follows its own path
(using a local search procedure as it will be described later).
2. L1 ≤ averagev ≤ L2 : the particle selects routes from the PBM.
3. L2 < averagev : the particle selects routes from the GBM.

In the beginning of the algorithm, L1 and L2 have large values. These values decrease during the iterations so that the probability of selecting routes from the two memories increases while, on the other hand, the probability of the particle searching the solution space on its own using local search decreases.
Thus, depending on the previous comparison with L1 and L2, a number of routes is selected from the memories and a number of routes is selected from the particle. The percentage of the routes that is selected from the memories depends on the average value of the velocities (averagev). In the case that the particle selects to go towards its personal best solution, if this value is near to L1, then most of the routes are selected from the PBM and, if it is near to L2, then most of the routes are selected from the current solution (as it holds that L1 ≤ averagev < L2). On the other hand, in the case that the particle selects to go towards the global best solution, if the value is near to L2, then most of the routes are selected from the GBM and, if it is far from L2, most of the routes are selected from the current solution. The routes are selected in such a way that no node belongs to two different routes. As with this procedure some nodes may not be assigned to any route, a greedy procedure, based on the procedure described in Section 3.4, is used in order to produce new routes from these nodes. The local search procedure described in Section 3.5 is applied to each one of the particles.
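The decision rule of Eqs. (4)–(6) and the three cases above can be sketched as follows; the parameter names follow the text, while the returned labels are ours.

```python
def acnt_decision(v, t, itermax, ubound, lbound, w1, w2):
    """Decide a particle's move using Eqs. (4)-(6): compare the average
    velocity with the thresholds L1 and L2. The caller is expected to
    choose w2 > w1 so that L2 > L1, as required in the text."""
    averagev = sum(v) / len(v)                                       # Eq. (4)
    L1 = (ubound - lbound) * (w1 - w1 * t / itermax) + lbound        # Eq. (5)
    L2 = (ubound - lbound) * (w2 - w2 * t / (2 * itermax)) + lbound  # Eq. (6)
    if averagev < L1:
        return "own_path"     # case 1: local search only, no memory selection
    elif averagev <= L2:
        return "follow_pbm"   # case 2: select routes from the personal best memory
    else:
        return "follow_gbm"   # case 3: select routes from the global best memory
```

Note that L2 shrinks at half the rate of L1 (the 2 × itermax factor in Eq. (6)), so the band in which the PBM is chosen widens as the iterations progress.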

3.4. Description of the initial solutions

Usually, in a PSO algorithm, the initial solutions are produced at random. However, this may lead to very bad initial solutions due to the number of constraints that have to be taken into account in the solution of a VRP with Time Windows. Thus, it is preferable to use a more systematic way to produce the initial solutions. This was achieved with the use of a variant of the Greedy Randomized Adaptive Search Procedure (GRASP) [11]. GRASP has been successfully used in the past for the production of the initial solutions in routing problems in the frame of a PSO algorithm [27–29,33]. GRASP is a two-phase procedure in which, at each step, a node is added to the partial solution until all nodes have been assigned to a route. The difference from the classic nearest neighbor procedure is that in the GRASP algorithm it is not necessarily the best node that is added to the solution; instead, the selection is performed randomly (in most cases) from a list, the Restricted Candidate List (RCL), in which the most promising moves (nodes, in the specific problem) are stored. When a node is selected, the list is updated with the next most promising candidate for inclusion in the route.
The most interesting part of the GRASP algorithm is finding the suitable RCL for the problem. In the Vehicle Routing Problem with Time Windows, in order to find a solution with a greedy algorithm, we have to take into account which node is nearest to the node that was served last, as well as the time window constraints. The node nearest to the last served node may not be servable because the time window constraint is violated, or it may be servable but the vehicle would have to wait a long time before starting the service due to the time window. In order to avoid these two situations, we create the RCL as follows. In the step by step procedure of adding a node (customer) to a route, there are nodes that have already been served, nodes that could be served by the current route and nodes that could not be served by the current route. All nodes that have already been served are not taken into account anymore. For the other nodes, initially, two constraints are checked. The first is the capacity constraint of the vehicle (whether the customer could be served with the residual capacity of the vehicle) and the second is the time window constraint. For any node i, a value (twvali) is calculated that gives the difference between the ending time of the time window of node i and the time that the vehicle arrives at node i. If this value is negative, then node i cannot be a candidate for inclusion in the route. If this procedure yields no candidate node, the vehicle returns to the depot and a new route begins. On the other hand, if this value is positive, we have to find a way to combine twvali and the distance from the current node to node i. Thus, two auxiliary lists are constructed. In the first one, the nodes are added in ascending order of their distance from the current node, while in the second one the nodes are added in ascending order of twvali. Thus, the most preferable nodes are high in both lists. Then, a new value is calculated for each node (RCLvali), which is the sum of the ranks that the node has in the two lists (for example, if a node is third in the first list and eighth in the second, its RCLvali is equal to eleven), and the RCL is constructed based on RCLvali in ascending order. The length of the RCL is different for each particle and is selected randomly. A node is selected for inclusion in the route randomly from the RCL. Before any node is added to the route under construction, it is also checked whether there is enough time for the vehicle to return to the depot. If there is not enough time, then the node is rejected from the current route and either the vehicle moves to the next node (if possible) or it returns to the depot to start a new route.
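The RCL construction just described (two auxiliary rankings combined by summing ranks) can be sketched as below. The function names and callback signatures are assumptions, and the feasibility filtering (capacity, negative twvali, return-to-depot check) is assumed to have happened before the call.

```python
import random

def build_rcl(current, candidates, dist, twval, rcl_len):
    """Sketch of the RCL construction: rank the feasible candidate nodes by
    distance from the current node and by twval (time-window slack), sum the
    two ranks (RCLval) and keep the rcl_len best. dist(a, b) and twval(n) are
    callbacks; candidates with negative twval are assumed filtered out."""
    by_dist = sorted(candidates, key=lambda n: dist(current, n))
    by_tw = sorted(candidates, key=twval)
    rank_d = {n: i for i, n in enumerate(by_dist, start=1)}
    rank_t = {n: i for i, n in enumerate(by_tw, start=1)}
    # RCLval = sum of the node's ranks in the two auxiliary lists
    return sorted(candidates, key=lambda n: rank_d[n] + rank_t[n])[:rcl_len]

def select_from_rcl(rcl):
    """A node is chosen uniformly at random from the RCL."""
    return random.choice(rcl)
```

In the paper, rcl_len differs per particle and is itself chosen randomly; here it is simply passed in as an argument.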

3.5. Description of the local search procedure

A Variable Neighborhood Search (VNS) algorithm [18], suitably adjusted for the Vehicle Routing Problem with Time Windows, is applied to each particle in the swarm in order to improve the solutions (either the initial solutions, as the local search is the second phase of the GRASP, or the solutions produced in each iteration of the algorithm). The local search procedure begins with the addition of a penalty function (PFl) to each one of the routes (l). The penalty function shows whether the route is feasible or not based on the demands of the customers. Initially, all routes are feasible. In general, when the iterations of the local search algorithm start, there are two possibilities for a route: either it has a positive PFl value or it has a negative PFl value. A positive value means that the route is infeasible (a node should be relocated to another route) and a negative value means that the route is feasible. The higher the PFl value, the less acceptable the route, meaning that we begin reducing the PFl value from this route. On the other hand, the smaller the PFl value, the more acceptable the route, meaning that into this route we will try to relocate a node from an infeasible route. The target is for all PFl values to be negative or equal to zero.
As the Vehicle Routing Problem with Time Windows is treated as a hierarchical optimization problem, in which first the number of vehicles and then the corresponding routing cost is minimized, we initially calculate the ideal number of vehicles by dividing the sum of the demands by the capacity of the vehicles (a homogeneous fleet is assumed). If the number of vehicles produced in the construction phase of the routes equals the ideal number of vehicles (a rare case in the VRPTW), two kinds of local search algorithms are applied to the solution: one inside each route, in order to improve that route, and one between routes, either a 1-0 relocate or a 1-1 exchange, in order to move nodes between different routes. Whenever a successful move between two routes is performed, local search is applied inside the routes participating in the move. If, on the other hand, the number of vehicles is larger than the ideal number, an attempt is made to reduce the number of routes toward the ideal number. This is achieved by a sequence of 1-0 relocate moves that do not violate the time windows: nodes are relocated from the most promising route (the route with the fewest nodes) to the other routes, using either a 1-0 relocate or, more generally, a k-0 relocate procedure. Then, the PFl value of every remaining route is calculated. If PFl is negative for all routes, the new solution is feasible; local search inside the routes is performed and we try to reduce the number of routes again (if it is still above the ideal number) with the same procedure. If no further reduction is possible, we proceed to the next particle. If, instead, one or more PFl values are positive, a sequence of 1-0 relocates is performed, moving nodes from routes with positive PFl to routes with negative PFl, until either a feasible solution is obtained or it is proved that no feasible solution with this number of routes can be reached from the current one. In the latter case, the last feasible solution for this particle is kept and the algorithm proceeds to the next particle.
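The two quantities driving the route-reduction step above can be sketched as follows (hypothetical helper names; a homogeneous fleet is assumed, as in the text):

```python
import math

def ideal_number_of_vehicles(demands, capacity):
    """Lower bound on the fleet size: ceil(total demand / vehicle capacity)."""
    return math.ceil(sum(demands) / capacity)

def route_to_empty(routes):
    """The 'most promising' route to eliminate: the one with the fewest nodes."""
    return min(range(len(routes)), key=lambda l: len(routes[l]))
```

For example, four customers with demands 5, 7, 4 and 9 and a vehicle capacity of 10 give an ideal fleet of ceil(25/10) = 3 vehicles.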

3.6. Adaptiveness in the parameters of the Particle Swarm Optimization

The parameters to be adapted are the number of particles, the number of iterations, the number of local search iterations, c1 and c2, the upper and lower bounds of the velocities and, finally, w1 and w2 from Eqs. (5) and (6). Initially, random values are assigned to the parameters of PSO, taking into consideration that these values must not violate certain bounds. For example, the sum of c1 and c2 must be greater than 4 due to Eq. (3), and the upper bounds must not take values less than zero, as in that case the PSO algorithm could not work properly. The number of consecutive iterations (Max iterations) with no improvement of the best solution was set equal to the initial number of particles.
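The requirement c1 + c2 > 4 stems from the Clerc–Kennedy constriction coefficient, which is only real-valued for φ = c1 + c2 > 4. A sketch of a random initialization respecting this bound follows; the sampling ranges are illustrative assumptions, not values taken from the paper:

```python
import math
import random

def sample_initial_parameters(rng=random):
    """Draw random initial c1, c2 respecting c1 + c2 > 4, and compute the
    resulting constriction coefficient chi (ranges are illustrative)."""
    c1 = rng.uniform(2.05, 3.5)
    # Lower bound on c2 enforces phi = c1 + c2 > 4.
    c2 = rng.uniform(max(2.05, 4.0 - c1) + 1e-9, 3.5)
    phi = c1 + c2
    # Clerc-Kennedy constriction coefficient, defined only for phi > 4.
    chi = 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
    return c1, c2, chi
```

For any φ > 4 the resulting χ lies strictly between 0 and 1, which is what keeps the velocities bounded in a constriction PSO.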
The initial velocities are set equal to zero. After the initialization of the particles, the fitness of each particle and the best particle are calculated, together with the average fitness of all particles and the average of the best solutions of all particles. In the first iteration, these values coincide for each particle. The initial random values of all parameters are taken as the best values so far, and the algorithm proceeds as a classic constriction PSO. The best value of each parameter is updated whenever the average of the best solutions of all particles in a given iteration is smaller than the best such average found so far. Three different conditions control the parameters during the iterations of the algorithm. In the first one, if the best solution has not been improved for a consecutive number of iterations, the values of c1 and c2, of the upper and lower bounds of positions and velocities, of the number of local search iterations, and of w1 and w2 are updated as follows:
c1 = c1opt + (c1 − c1opt)/c1opt  and  c2 = c2opt + (c2 − c2opt)/c2opt    (7)

upositions = UB + (upositions − UB)/UB  and  lpositions = −upositions    (8)

uvelocities = V + (uvelocities − V)/V  and  lvelocities = −uvelocities    (9)

Y. Marinakis, M. Marinaki and A. Migdalas / Information Sciences 481 (2019) 311–329

Local Search iter = LS + (Local Search iter − LS)/LS    (10)

w1 = w1opt + α5  and  w2 = w2opt + α6    (11)

where c1opt, c2opt, UB, V and LS are the optimum values for c1, c2, the upper bounds of the positions, the upper bounds of the velocities and the local search iterations, respectively.
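Eqs. (7)–(10) share the same update rule: the current value is pulled toward the best-so-far value, with a step scaled by that best value. A minimal sketch (hypothetical function name):

```python
def adapt_parameter(current, best):
    """Shared update rule of Eqs. (7)-(10): new = best + (current - best)/best.
    'best' is the best-so-far value of the parameter (c1opt, UB, V, LS, ...)."""
    return best + (current - best) / best

# Example for Eq. (7): with c1 = 2.8 and best-so-far c1opt = 2.5,
# the new c1 is 2.5 + (2.8 - 2.5)/2.5, i.e. approximately 2.62.
```

Eq. (11) instead perturbs w1 and w2 by the random quantities α5 and α6, so it does not fit this shared rule.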
In the second condition, the number of particles is increased. If, for a consecutive number of iterations, neither the best solution nor the average best solution of all particles has improved, a number of new particles (Particles) is created. All new particles are initialized with random values and use the current parameter values as their initial parameters. The number of new particles is calculated from:

Particles = NP + (Particles − NP)/NP    (12)

where NP is the best value of the number of particles.
Finally, in the third condition, the number of particles is decreased. If the best solution has not been improved for a consecutive number of iterations and the best value of a particle is more than 5% worse than the best particle of the swarm, this particle is deleted from the swarm. The number of consecutive iterations was set equal to abs(Initial Number of Particles − Particles) if the number of particles has changed at least once during the iterations; otherwise, it was set equal to Initial Number of Particles − Max iterations/abs(Max iterations − Local Search iter). When the algorithm converges, the best parameters have been found in addition to the best solution.
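The second and third conditions can be sketched as follows. The names are hypothetical, minimization is assumed, and the 5% criterion is read as "more than 5% worse than the swarm's best"; Eq. (12) yields a fractional value, so rounding before use is an additional assumption:

```python
def new_number_of_particles(particles, np_best):
    """Eq. (12): grow the swarm when both the best solution and the average
    best solution have stagnated. np_best is the best-so-far swarm size."""
    return np_best + (particles - np_best) / np_best

def should_delete(particle_best, swarm_best):
    """Third condition: drop a particle whose personal best cost is more
    than 5% worse than the swarm's best (minimization assumed)."""
    return particle_best > 1.05 * swarm_best
```

For instance, with 60 current particles and a best-so-far swarm size of 50, Eq. (12) gives 50 + 10/50 = 50.2 particles before rounding.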

4. Computational results

4.1. General results

Two sets totaling 356 benchmark instances were used to test the algorithm (implemented in modern Fortran and run on an Intel Core i5 2430M @ 2.40 GHz processor with 4 GB RAM under the Windows 7 Home Premium 64-bit operating system). The first set, proposed by Solomon [45], includes 56 instances divided into 6 subsets with 100 nodes each; the other, proposed by Gehring and Homberger [12], includes 300 instances of different sizes divided into 5 sets with 200, 400, 600, 800 and 1000 nodes, where each set contains 60 instances and is divided into 6 subsets of 10 instances each. In all tables and figures these subsets are denoted R1, R2, C1, C2, RC1 and RC2. The results in all tables are the average results over the corresponding instances of each subset; this is the way most researchers studying the Vehicle Routing Problem with Time Windows present their results. The subsets differ in the distribution of the nodes in the solution space. All instances denoted R1 and R2 have coordinates randomly generated from a uniform distribution. All instances denoted C1 and C2 have clustered customers and, finally, the instances denoted RC1 and RC2 have semiclustered customers (some of them clustered and some randomly generated). Finally, the subsets R1, C1 and RC1 differ from R2, C2 and RC2 in that the first three have tighter time windows, shorter scheduling horizons and smaller vehicle capacities, while the last three have softer time windows, longer scheduling horizons and larger vehicle capacities [36,42].
The efficiency of the proposed Multi-Adaptive Particle Swarm Optimization (MAPSO) algorithm is measured by the quality of the produced solutions. The quality is given in terms of the relative deviation from the best known solution, that is ω = ((cMAPSO − cBKS)/cBKS)%, where cMAPSO denotes the cost of the solution found by MAPSO and cBKS is the cost of the best known solution. To test the performance of the proposed algorithm, we applied MAPSO (and the other algorithms used in the comparisons) 30 times to each test instance.
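The quality measure ω is a direct transcription of the formula above into code:

```python
def quality(cost_mapso, cost_bks):
    """Relative deviation (%) of a MAPSO solution from the best known
    solution: omega = 100 * (c_MAPSO - c_BKS) / c_BKS."""
    return 100.0 * (cost_mapso - cost_bks) / cost_bks
```

For example, a MAPSO cost of 1210 against a best known cost of 1200 gives ω ≈ 0.83%; ω = 0 means the best known solution was matched.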
In Table 1 and in Figs. 1 and 2, the results of the proposed algorithm are presented in detail. Table 1 is divided into six parts according to the number of nodes (customers) in the subsets of each part. Thus, the first part has 100 nodes (customers) and comprises the 56 instances proposed by Solomon [45]. The other five parts have subsets with 200, 400, 600, 800 and 1000 nodes (customers), respectively, and comprise the 300 instances proposed by Gehring and Homberger [12]. Table 1 reports, for each instance of each subset, the average number of vehicles (NV), the average total traveled distance (TD), the average quality (ω), the average total traveled distance over the 10 (TDav10), 20 (TDav20) and 30 (TDav30) runs, the corresponding average standard deviations (std10, std20, std30) and the average computational time (AT(s)) in seconds. We present the average number of vehicles first because we solve the problem as a hierarchical optimization problem, in which the most important objective is the minimization of the number of vehicles used to serve the customers and, for that number of vehicles, the second objective is the minimization of the traveled distance. Thus, a solution with a better traveled distance but a larger number of vehicles was rejected if a solution with a smaller number of vehicles and a worse traveled distance

Fig. 1. Analytical results of the proposed algorithm in the first set of benchmark instances.

Fig. 2. Analytical results of the proposed algorithm in the second set of benchmark instances.

Table 1
Results of the algorithm in the two sets of benchmark instances.

100 customers

NV TD ω T Dav10 std10 T Dav20 std20 T Dav30 std30 AT(s)

R1 11.92 1209.99 0.01 1211.32 0.92 1211.09 0.85 1210.78 0.84 22.58
RC1 11.50 1384.18 0.00 1385.33 0.89 1385.23 0.77 1384.92 0.78 23.43
C1 10.00 828.38 0.00 829.67 0.92 829.47 0.81 829.15 0.82 18.55
R2 2.73 952.06 0.02 953.22 0.88 953.10 0.77 952.78 0.79 22.04
RC2 3.25 1119.60 0.02 1121.09 1.01 1120.78 0.90 1120.44 0.89 21.58
C2 3.00 589.86 0.00 591.20 0.97 590.72 0.64 590.65 0.84 19.55
200 customers
NV TD ω T Dav10 std10 T Dav20 std20 T Dav30 std30 AT(s)

R1 18.20 3618.53 0.25 3621.17 1.46 3620.31 1.35 3620.01 1.24 53.45
RC1 18.00 3187.95 0.37 3190.43 1.74 3189.84 1.36 3189.46 1.28 51.28
C1 18.90 2719.09 0.22 2722.11 1.49 2721.01 1.47 2720.65 1.35 59.22
R2 4.00 2942.64 0.40 2945.46 1.42 2944.66 1.43 2944.28 1.33 55.48
RC2 4.30 2545.46 0.39 2547.02 1.66 2547.44 1.44 2547.09 1.32 54.83
C2 6.00 1832.46 0.05 1834.14 1.73 1834.30 1.34 1834.00 1.23 50.48
400 customers
NV TD ω T Dav10 std10 T Dav20 std20 T Dav30 std30 AT(s)

R1 36.40 8483.26 1.67 8484.92 1.81 8485.19 1.36 8484.84 1.26 99.35
RC1 36.00 7921.42 0.56 7923.14 1.52 7923.49 1.42 7923.08 1.35 97.22
C1 37.60 7220.25 0.75 7222.33 1.52 7222.14 1.40 7221.79 1.29 85.48
R2 8.00 6234.38 1.63 6236.63 1.65 6236.26 1.29 6235.90 1.22 91.32
RC2 8.40 5357.34 1.30 5359.07 1.68 5359.39 1.37 5358.99 1.30 90.35
C2 11.60 3954.52 0.20 3957.09 1.46 3956.48 1.39 3956.14 1.29 86.14
600 customers
NV TD ω T Dav10 std10 T Dav20 std20 T Dav30 std30 AT(s)

R1 54.50 18,569.92 3.52 18,571.66 1.38 18,571.65 1.36 18,571.52 1.25 155.31
RC1 55.00 16,176.27 0.94 16,178.77 1.26 16,178.36 1.40 16,177.95 1.33 154.29
C1 57.30 14,427.88 0.62 14,430.33 1.63 14,429.75 1.36 14,429.45 1.25 152.47
R2 11.00 12,776.09 4.26 12,778.50 1.50 12,778.13 1.45 12,777.73 1.35 158.19
RC2 11.40 10,726.38 1.37 10,728.03 1.55 10,728.29 1.47 10,727.91 1.36 156.31
C2 17.40 7602.39 0.30 7604.41 1.54 7604.27 1.41 7603.89 1.32 153.28
800 customers
NV TD ω T Dav10 std10 T Dav20 std20 T Dav30 std30 AT(s)

R1 72.80 32,240.66 3.51 32,242.37 1.51 32,242.32 1.36 32,242.26 1.28 255.49
RC1 72.00 29,617.50 1.13 29,620.72 2.16 29,619.49 1.39 29,619.12 1.30 252.13
C1 74.90 25,482.40 0.22 25,485.74 1.56 25,484.26 1.40 25,483.93 1.28 235.41
R2 15.00 20,478.57 3.27 20,479.81 1.05 20,480.46 1.35 20,480.13 1.25 251.57
RC2 15.40 16,647.78 1.37 16,649.77 1.33 16,649.54 1.34 16,649.24 1.22 248.44
C2 23.20 11,669.78 0.28 11,671.89 1.59 11,671.58 1.35 11,671.28 1.24 244.19
1000 customers
NV TD ω T Dav10 std10 T Dav20 std20 T Dav30 std30 AT(s)

R1 92.00 50,282.50 5.57 50,284.79 1.59 50,284.34 1.33 50,284.06 1.23 401.38
RC1 90.00 44,962.90 1.53 44,965.60 1.38 44,964.80 1.35 44,964.47 1.24 394.26
C1 94.10 41,982.32 0.73 41,984.47 1.71 41,984.31 1.33 41,983.93 1.26 387.15
R2 19.00 30,314.44 5.40 30,316.87 1.77 30,316.34 1.40 30,315.98 1.30 400.49
RC2 18.20 24,467.57 1.89 24,470.23 1.56 24,469.50 1.44 24,469.16 1.33 395.22
C2 28.90 16,777.49 0.61 16,780.02 1.68 16,779.41 1.46 16,779.07 1.33 391.31

than the other solution was found. The number of vehicles found by the algorithm is very good, but we analyze it in more detail in the last part of the comparisons, where the results of the proposed algorithm are compared with the results of other algorithms from the literature.
In general, the algorithm performs very well, as can be concluded from the quality and the average quality of the results. In the small set of instances the algorithm almost always finds the best known solution. The results for the instances with 200 to 1000 nodes are also very good: for the clustered instances (C1 and C2 subsets) the quality of the solutions varies between 0.05 and 0.75, even for the instances with 1000 nodes. For the semiclustered instances (RC1 and RC2), the quality varies between 0.37 and 1.89, and for the random instances (R1 and R2) between 0.25 and 5.57. In the last case, although the quality is very good (less than 2) when the number of nodes is 200 or 400, it deteriorates for the larger instances. The reason is that the first objective is the minimization of the number of vehicles, and since the coordinates in the solution space are random in these instances, this leads to a deterioration in the quality of these solutions.

Fig. 3. Analytical comparison of the proposed algorithm with two versions of PSO in the first set of benchmark instances.

In order to assess the stability of the algorithm, we initially performed 10 different runs for each instance in each set (356 instances with 10 runs, 3560 runs of the algorithm in total). As the results stabilized at specific values, we increased the number of runs first to 20 per instance and, finally, to 30 (10,680 runs of the algorithm in total). We tested the algorithm this many times because we wanted to see whether increasing the number of executions would deteriorate the computational results. An improvement of the results was very difficult to obtain, as in most instances the algorithm had already found the best known solution, or a value near it, within the first ten runs. Table 1 presents three different average values and the corresponding standard deviations: for the first ten executions of the algorithm (for each instance in each subset), then for twenty executions and, finally, for thirty executions. As can be observed, the average values decrease and are close to the best values. Also, the standard deviation is not affected by the increase in the number of executions; in most cases it decreases or remains constant. This means that there are no statistical differences between the different executions of the algorithm.
As is well known, a swarm intelligence or evolutionary algorithm may, due to its stochasticity, produce very different results across a number of executions on the same instance; if an algorithm manages to give such stable results with low standard deviations, it is a very promising and effective algorithm. The main reason we performed this kind of analysis is that we wanted to demonstrate this fact for our algorithm.
Regarding the computational time, we observe that the algorithm converges faster on the clustered instances, that the computational times increase with the number of nodes, and that they vary from around 20 seconds for the small instances to at most 400 seconds when the number of nodes is equal to 1000.
In Figs. 1 and 2, the quality of the results of the proposed algorithm (MAPSO) and the average quality of the ten runs (MAPSOav) are presented analytically. The first figure (Fig. 1) shows the 56 small instances, while the second figure (Fig. 2) shows the 300 large instances. The order of the instances in Fig. 1 is the same as in the tables, i.e., R1, RC1, C1, R2, RC2 and C2, while in Fig. 2 the order is first the 50 R1 instances for 200, 400, 600, 800 and 1000 nodes, respectively, then the 50 R2 instances, then the 50 RC1 and the 50 RC2 instances and, finally, the 50 C1 and the 50 C2 instances. All figures (Figs. 3–8) follow the same order of instances. From Fig. 1, we can see that the proposed algorithm finds the best known solution in almost all instances, and in the instances where the best known solution was not found, the quality of the solution is less than 0.1 in almost all cases. Fig. 2 supports the same observations as Table 1. More precisely, the quality for the

Fig. 4. Analytical comparison of the proposed algorithm with two versions of PSO in the second set of benchmark instances.

Fig. 5. Analytical comparison of the proposed algorithm with two versions of the CNTPSO in the first set of benchmark instances.

Fig. 6. Analytical comparison of the proposed algorithm with two versions of the CNTPSO in the second set of benchmark instances.

Fig. 7. Analytical comparison of the proposed algorithm with another adaptive PSO algorithm (ACNTPSO) in the first set of benchmark instances.

Fig. 8. Analytical comparison of the proposed algorithm with another adaptive PSO algorithm (ACNTPSO) in the second set of benchmark instances.

random instances is initially very good (less than 1), but as the number of nodes increases (between instances 40 to 50 and 90 to 100 in Fig. 2) the quality deteriorates significantly. However, the results for the RC1, RC2, C1 and C2 subsets are much better, and show smaller variations in quality, than those for R1 and R2, as explained earlier. It should be noted that in some instances the quality of the solutions seems to be very bad, as it approaches 9% relative to the best known solution. This is due to the fact that researchers in the Vehicle Routing Problem with Time Windows use a hierarchical optimization function, where first the number of vehicles is optimized and, afterwards, the traveled distance is minimized for the selected number of vehicles. Of course, as can be seen when the proposed algorithm is compared with the most promising algorithms from the literature (see Table 4), the algorithm is, despite this fact, one of the most effective in Cumulative Total Distance on all data sets, which is the value most researchers use when comparing algorithms for the solution of the Vehicle Routing Problem with Time Windows.
In Table 2, the average values of the best parameters calculated by the proposed algorithm are presented. As mentioned in the description of the algorithm, the algorithm has three different adaptive strategies, one of which is the adaptation (optimization) of the parameters inside the procedure. We use the term adaptation rather than optimization because it is very difficult to obtain the same set of parameters in every execution of the algorithm. Usually in the literature, these parameters are set in the initial phase of the algorithm and then kept constant for all its iterations. In the proposed algorithm, a random value is initially given to each of these parameters, which is then adapted inside the algorithm. The significant outcome of the results, as presented in Table 2, is that all these parameters converged to values that do not differ much from one another. This shows that the adaptation of the parameters spares the user the procedure of finding a suitable set of parameters before the execution of the algorithm, which is a very difficult and time-consuming task.
More precisely, Table 2 presents the nine parameters: the number of iterations (Max iterations), the number of local search iterations (LS), the number of particles (NP), the four parameters of the velocity equation (c1opt, c2opt, ubound and lbound) and the two parameters of the ACNT equations (w1opt and w2opt). The structure of the table is the same as that of Table 1. The most important outcome of this table is that the results are very stable regardless of the size of the instance and of whether the customers are clustered or not. The number of particles that gave the best results varies between 69.48 and 93.10 depending on the set, c1opt varies between 2.43 and 3.48, and the number of iterations in which the algorithm converges varies between 167.87 and 196.66. Similarly small variations also hold for the other six parameters. This is very important because, although it is very difficult to find the optimum set of parameters, the parameters found in all runs and

Table 2
Average Values of the Best Parameters taken from the proposed algorithm in the two sets of
benchmark instances.

Max iterations LS NP c1opt c2opt w1opt w2opt ubound lbound


100 customers

R110 184.42 36.67 77.42 2.43 2.55 0.98 1.17 3.18 −3.09
R120 184.62 36.87 77.60 2.56 2.66 1.07 1.55 3.59 −2.76
R130 184.82 37.10 77.56 2.45 2.81 1.32 1.38 3.60 −2.63
RC1 182.96 33.81 93.10 3.02 2.83 1.06 1.30 2.84 −2.83
C1 196.66 36.41 89.88 2.76 2.75 1.25 1.32 3.74 −3.14
R2 185.71 34.98 77.07 2.77 2.65 1.13 1.34 3.33 −2.74
RC2 184.87 32.21 76.08 2.87 2.58 1.03 1.34 3.65 −2.91
C2 185.17 32.47 75.92 3.38 2.90 1.21 1.58 3.76 −2.97
200 customers
R110 179.80 35.90 80.70 2.60 2.59 1.13 1.49 3.34 −3.13
R120 180.09 36.13 80.77 3.07 2.81 1.44 1.75 3.36 −3.13
R130 179.85 36.20 80.87 2.83 3.09 1.58 1.49 3.45 −3.05
RC1 175.81 40.55 82.76 3.10 3.00 1.17 1.48 3.72 −2.98
C1 167.87 34.77 83.66 2.64 2.93 1.25 1.40 3.53 −3.27
R2 189.63 39.29 75.81 2.85 2.56 1.26 1.51 3.57 −3.04
RC2 170.47 37.17 76.35 3.17 2.71 1.16 1.61 3.70 −2.85
C2 179.95 37.22 78.14 2.61 2.73 1.30 1.44 3.62 −3.37
400 customers
R110 174.20 33.90 75.50 2.50 2.39 0.99 1.24 3.55 −3.55
R120 174.35 34.13 75.88 2.76 2.58 1.48 1.34 3.86 −3.09
R130 174.68 34.25 75.87 2.58 2.86 1.08 1.43 3.61 −3.47
RC1 172.76 37.89 85.43 2.89 2.83 1.19 1.77 3.66 −3.18
C1 185.70 38.66 73.47 2.74 2.72 1.13 1.46 3.34 −3.14
R2 174.84 33.96 79.45 2.84 2.89 1.22 1.81 3.80 −3.23
RC2 186.07 34.99 69.48 2.59 2.79 1.43 1.46 3.62 −3.13
C2 175.84 41.47 78.52 2.80 2.69 1.29 1.53 3.65 −2.96
600 customers
R110 176.40 36.30 74.20 2.50 2.59 1.00 1.26 3.05 −3.04
R120 176.48 36.72 74.50 2.94 2.97 1.12 1.54 3.38 −2.78
R130 176.86 36.63 74.70 2.68 2.62 1.41 1.50 3.33 −2.74
RC1 172.90 33.43 78.55 2.53 3.10 1.21 1.77 3.40 −2.62
C1 174.71 36.94 78.32 2.88 3.00 1.17 1.38 3.43 −2.59
R2 184.96 39.63 80.72 2.82 2.74 1.04 1.58 3.62 −3.03
RC2 177.05 33.57 75.84 2.85 2.81 1.29 1.70 3.42 −2.79
C2 181.58 35.70 82.29 2.58 2.84 1.07 1.78 3.52 −3.40
800 customers
R110 184.40 35.20 84.10 2.53 2.44 1.04 1.30 2.98 −3.32
R120 184.41 35.35 84.29 2.69 2.77 1.32 1.46 3.06 −2.95
R130 184.43 35.55 84.46 2.63 2.63 1.23 1.56 3.08 −3.13
RC1 174.02 40.04 77.51 2.92 2.79 1.12 1.63 3.21 −3.14
C1 173.14 33.09 73.76 2.60 2.69 1.21 1.79 3.73 −3.21
R2 181.79 36.97 81.45 3.01 2.85 1.17 1.61 3.32 −3.07
RC2 174.49 37.24 74.71 2.54 3.05 1.45 1.53 3.68 −2.97
C2 175.75 34.36 79.12 3.12 2.86 1.22 1.32 3.49 −2.95
1000 customers
R110 175.70 36.10 80.00 2.45 2.59 1.01 1.28 3.32 −3.35
R120 175.82 36.14 80.45 2.68 2.69 1.26 1.42 3.46 −2.86
R130 176.17 36.22 80.44 2.71 3.08 1.06 1.48 3.55 −3.29
RC1 169.82 36.55 80.54 2.90 2.94 1.27 1.49 3.37 −3.48
C1 173.04 37.29 79.22 2.68 2.87 1.16 1.67 3.21 −3.05
R2 173.50 37.67 79.53 2.82 3.04 1.54 1.92 3.46 −2.86
RC2 179.64 35.84 79.97 2.96 2.80 1.53 1.56 3.36 −3.12
C2 179.36 34.49 76.52 2.68 3.10 1.19 1.71 3.45 −3.19

in different sets are very close to each other, without any extreme values among them. An extreme value of, say, 500 particles in one instance, with the best number of particles between 70 and 90 in all other instances, would mean that the algorithm cannot converge to any parameter values but instead gives values strongly dependent on the instance being solved.
Also, in Table 2, for the R1 set (for any number of customers) we present, in addition to the average results of the 30 runs of the algorithm, the average results for the first 10 and 20 runs. We give these numbers to demonstrate the stability of the algorithm. We observe that, independently of the number of executions of the algorithm, the parameters converge to similar values; thus, we can say that the proposed algorithm has located the region where the best parameter values for each instance lie.

Table 3
Comparison of the proposed algorithm with other PSO versions in the two sets of benchmark
instances.

MAPSO CNTPSO1 CNTPSO2 CNTPSOin HybPSO ACNTPSO


100 customers

R1 1209.99 1211.37 1212.87 1214.12 1217.53 1210.08


RC1 1384.18 1386.98 1386.40 1389.14 1392.13 1384.24
C1 828.38 828.38 830.44 828.70 831.16 828.38
R2 952.06 955.60 955.27 958.36 961.86 952.71
RC2 1119.60 1122.30 1122.96 1125.20 1129.54 1120.00
C2 589.86 589.86 591.67 590.36 590.28 589.86
AT(s)100 21.29 22.41 22.28 23.01 23.56 21.85
200 customers
R1 3618.53 3622.61 3623.27 3630.00 3631.28 3621.80
RC1 3187.95 3192.81 3192.46 3199.90 3199.83 3194.91
C1 2719.09 2723.75 2723.25 2732.10 2732.13 2723.51
R2 2942.64 2948.45 2946.25 2955.29 2955.34 2948.35
RC2 2545.46 2553.04 2548.94 2559.16 2560.04 2550.97
C2 1832.46 1836.50 1838.34 1843.85 1842.55 1836.65
AT(s)200 54.12 55.28 55.17 55.49 56.15 54.88
400 customers
R1 8514.11 8490.04 8489.67 8501.35 8497.18 8489.50
RC1 7921.42 7927.69 7926.36 7938.14 7939.26 7928.43
C1 7220.25 7228.83 7226.91 7238.36 7237.82 7228.32
R2 6234.38 6239.81 6238.23 6249.92 6250.31 6241.39
RC2 5357.34 5363.72 5361.49 5373.89 5371.17 5362.63
C2 3954.52 3961.71 3960.29 3971.69 3969.04 3961.65
AT(s)400 91.64 92.15 92.18 92.59 93.18 91.82
600 customers
R1 18,569.92 18,578.00 18,575.70 18,587.66 18,590.57 18,579.04
RC1 16,176.27 16,182.77 16,183.17 16,196.70 16,195.43 16,183.49
C1 14,427.88 14,434.05 14,434.81 14,449.49 14,448.07 14,435.54
R2 12,776.09 12,785.69 12,783.53 12,795.54 12,798.92 12,784.06
RC2 10,726.38 10,732.51 10,736.94 10,745.89 10,744.49 10,732.13
C2 7602.39 7609.35 7610.40 7620.53 7618.37 7607.39
AT(s)600 154.98 155.91 156.18 156.11 157.31 155.08
800 customers
R1 32,240.66 32,249.73 32,252.49 32,269.11 32,269.81 32,253.55
RC1 29,617.50 29,628.14 29,630.40 29,647.45 29,643.18 29,627.98
C1 25,482.40 25,490.10 25,490.95 25,506.69 25,506.75 25,489.86
R2 20,478.57 20,483.56 20,491.11 20,508.33 20,504.99 20,490.26
RC2 16,647.78 16,656.04 16,657.73 16,672.42 16,674.40 16,657.04
C2 11,669.78 11,681.24 11,678.68 11,695.09 11,694.82 11,679.36
AT(s)800 247.87 247.91 247.75 248.85 249.91 247.18
1000 customers
R1 50,282.50 50,296.65 50,296.78 50,315.50 50,320.41 50,293.89
RC1 44,962.90 44,975.69 44,974.54 44,994.37 44,991.54 44,971.80
C1 41,982.32 42,001.79 41,994.18 42,017.42 42,016.71 41,993.43
R2 30,314.44 30,328.71 30,327.24 30,341.30 30,344.15 30,326.22
RC2 24,467.57 24,482.71 24,480.26 24,500.39 24,501.79 24,478.06
C2 16,777.49 16,788.44 16,789.97 16,817.03 16,816.83 16,796.11
AT(s)1000 394.97 398.15 398.28 399.31 402.15 396.28
AT(s)total 160.81 161.96 161.97 162.56 163.71 161.18

4.2. Comparison of the proposed method with other PSO algorithms

In Table 3, a comparison with other versions of the Particle Swarm Optimization algorithm is presented. More precisely, the algorithm was compared with 5 other versions of PSO, mostly published by our research group in the past for the solution of a variant of the vehicle routing problem. The reason we use these 5 algorithms is that each of them has specific characteristics, and their comparison with the proposed algorithm shows that the adaptiveness used in the proposed algorithm leads to a very efficient implementation of PSO for routing-type problems. Initially, the algorithm is compared with the HybPSO algorithm [33]. The HybPSO algorithm was used for the solution of the Capacitated Vehicle Routing Problem with very good results, as in most instances on which it was tested it found the best known solutions. It produces the initial solutions with a Greedy Randomized Adaptive Search Procedure and improves the solutions with a variant of the VNS algorithm. This algorithm does not use the Combinatorial Neighborhood Topology at all. The second algorithm with which the proposed algorithm is compared is the CNTPSOin algorithm [27]. The CNTPSOin algorithm uses an initial version of the Combinatorial Neighborhood Topology, without Eqs. (5) and (6), and it was used for the solution of the Capacitated Location Routing Problem. When it was published, some new best

solutions were found (these best solutions have since been further improved by other algorithms from the literature). The third and fourth algorithms are two versions of the CNTPSO algorithm [30,31], denoted CNTPSO1 and CNTPSO2. The CNTPSO is the basis of the proposed algorithm, as analytically explained in previous sections. For the comparisons, we use two different versions of the CNTPSO algorithm: in the first one, CNTPSO1, all parameters are optimized before the algorithm begins and the same best set of parameters is used for all instances, while in the second one, CNTPSO2, the algorithm uses the best set of parameters produced by the proposed algorithm, different for each instance. Finally, the proposed algorithm is compared with the results of another adaptive PSO algorithm, the ACNTPSO, which uses GRASP and VNS in the same way as the proposed algorithm but uses the initial version of CNT and, instead of equations for the adaptation of the parameters, restricts all parameters to specific bounds and realizes the adaptation by adding a constant (random) number to each best parameter value.
It can be seen from Table 3 that the proposed algorithm performs better than all the other algorithms used in the comparisons. In order to analyze these comparisons better, as they are very important for assessing the effectiveness of the proposed algorithm, we present the analytical results of the proposed algorithm and of the other five algorithms in six figures in addition to Table 3. More precisely, the proposed algorithm is compared, initially, with the results of the HybPSO and the CNTPSOin algorithms in Figs. 3 and 4 (Fig. 3 presents the results for the first set of instances and Fig. 4 the results for the second set), afterwards with the results of the CNTPSO1 and the CNTPSO2 algorithms in Figs. 5 and 6 (Fig. 5 for the first set of instances and Fig. 6 for the second set) and, finally, with the results of the ACNTPSO in Figs. 7 and 8 (Fig. 7 for the first set of instances and Fig. 8 for the second set).
From Figs. 3 and 4, we can see that the differences in solution quality between the proposed algorithm and both HybPSO and CNTPSOin are significant. In particular, the results of the HybPSO deviate by more than 1 in quality from those of the proposed algorithm, with the proposed algorithm obtaining the better results. The main difference between the proposed algorithm and the other two is the use of the Adaptive Combinatorial Neighborhood Topology, so the addition of this topology appears to yield a more effective algorithm. The most important comparisons are presented in Figs. 5 and 6, where the proposed algorithm is compared with the CNTPSO algorithm without the adaptiveness. The improvement in the results is not large, but the proposed algorithm performs better than the other two algorithms on every instance in both figures. A particularly telling case is when the best parameters found by the proposed algorithm are used as the parameters of the non-adaptive CNTPSO algorithm: that algorithm finds very good results, but neither better than nor equal to those found by the proposed algorithm. This shows that what matters most is not the final parameter values but the fact that, throughout the procedure, the algorithm runs with different parameters, which gives it greater exploration and exploitation abilities. Of course, the differences in quality are around 0.5, which is not a large value. Finally, when the algorithm is compared with the other adaptive PSO algorithm (Figs. 7 and 8), the proposed algorithm performs better on all instances, which shows that the adaptation used in the proposed algorithm is well suited to the application of PSO to the problem solved in this paper.
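The per-instance quality deviations discussed above can be computed as percentage gaps from a best-known solution. The sketch below is purely illustrative: the function name and the cost values are assumptions, not figures from the paper.

```python
def quality(cost, best_known):
    """Percent deviation of a solution's cost from the best-known cost."""
    return 100.0 * (cost - best_known) / best_known

# Hypothetical costs for one instance (illustrative values only).
best_known = 1000.0
mapso_cost, hybpso_cost = 1003.0, 1015.0

gap = quality(hybpso_cost, best_known) - quality(mapso_cost, best_known)
# A gap above 1 percentage point corresponds to the "deviation more
# than 1" reported for HybPSO versus the proposed algorithm.
assert gap > 1.0
```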
Table 3 also presents the average running times over all sets and all executions of the algorithms. We present separately the average running times for instances with 100 customers, 200 customers, and so on up to 1000 customers. The last line of the Table gives the overall average running times. We can see that the computational running times increase as the number of nodes increases. However, the proposed algorithm is faster than all the other PSO implementations. This is because the proposed algorithm has adaptive procedures in each iteration for finding its parameters and the ability to increase or decrease the number of particles, the number of local search iterations and the number of iterations of the algorithm. Thus, a more flexible, faster and more effective algorithm has been presented.
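The kind of resource adaptation credited here with the lower running times might look as follows. This is a minimal sketch under stated assumptions: the thresholds, decrements and the stagnation criterion are hypothetical, not the paper's actual rules.

```python
def adapt_resources(particles, ls_iters, stagnant_iters,
                    min_particles=10, min_ls=5):
    """Shrink the swarm and the local-search budget when the best
    solution has stagnated, never going below the given minima."""
    if stagnant_iters > 20:  # no improvement for a while
        particles = max(min_particles, particles - 5)  # fewer particles
        ls_iters = max(min_ls, ls_iters - 2)           # lighter local search
    return particles, ls_iters

# While the search keeps improving, the budgets are left unchanged.
assert adapt_resources(50, 20, 5) == (50, 20)
```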

4.3. Comparison of the proposed method with the most effective algorithms from the literature

In the following, the proposed algorithm is compared with a number of algorithms from the literature. More precisely, in Table 4 the proposed algorithm is compared with the results of 20 other algorithms on both sets of instances. Most of the results are taken from [17,36,42] and concern the most important algorithms that have been applied to these benchmark sets for the solution of the VRPTW [7,12,17,36,42,47]. In order to be consistent with the literature, we use the same abbreviations for the algorithms as in [42]. Specifically, the proposed algorithm is compared with a hybrid genetic algorithm (HGSADC)
[47], an Arc-Guided Evolutionary Algorithm (AGEA) [42], an effective local search algorithm (I05) [21], a two-stage heuristic
with ejection pools and generalized ejection chains (LZ) [25], a two-phase metaheuristic (GH02) [13] and an improvement
of the previous algorithm from the same authors denoted as a two-phase hybrid metaheuristic (HG05) [20], a multi-start
local search algorithm (BHD) [7], a multi-parametric evolution strategies algorithm (MBD) [37], a cooperative parallel meta-
heuristic (LC) [4] and a guided cooperative search algorithm (LCK) [5], a branch and price based large neighborhood search
algorithm (PDR) [39], a discrete particle swarm optimization approach (S-PSO) [17], a two-stage hybrid local search (BVH)
[3], an adaptive large neighborhood search (PR) [38], a reactive variable neighborhood search (B) [6], a reactive greedy ran-
domized variable neighborhood tabu search (RPTI) [41], a scatter search algorithm (RC) [44], an evolutionary meta-heuristic
(HG99) [19], an iterated local search algorithm (I06) [22], an active guided evolution strategies algorithm (MB05) [36], a
hybrid shuffled frog leaping algorithm (HSFLA) [26] and a cooperative population learning algorithm (SPLA) [2].

Table 4
Comparison of the proposed algorithm with other algorithms from the literature in both sets of benchmark instances in CNV and CTD.

Algorithm 100 customers 200 customers 400 customers 600 customers 800 customers 1000 customers

CNV CTD CNV CTD CNV CTD CNV CTD CNV CTD CNV CTD

MAPSO 405 57,197 694 168,461 1380 391,712 2066 802,789 2733 1,361,367 3422 2,087,872
HGSADC [47] 405 57,196 694 168,092 1381 388,013 2068 786,373 2739 1,334,963 3420 2,036,700
AGEA [42] 405 57,216 694 169,163 1381 395,936 2066 816,326 2739 1,424,321 3428 2,144,830
I05 [21] 405 57,444 – – – – – – – – – –
LZ [25] 405 57,368 694 169,296 1382 393,695 2068 802,681 2742 1,372,427 3429 2,071,643
GH02 [13] 406 57,641 696 179,328 1392 428,489 2079 890,121 2760 1,535,849 3446 2,290,367
HG05 [20] 408 57,422 699 180,602 1397 431,089 2088 890,293 2773 1,516,648 3459 2,288,819
BHD [7] 406 57,422 695 172,406 1391 399,132 2084 820,372 2776 1,384,306 3465 2,133,376
MBD [37] 406 56,812 695 169,968 1390 394,818 – – – – – –
LC [4] 407 57,412 694 173,061 1390 408,281 2088 836,261 2766 1,475,281 3451 2,225,366
PDR [39] 405 57,255 694 168,556 1385 389,011 2071 800,797 2745 1,391,344 3432 2,096,823
S-PSO [17] 422 58,677 – – – – – – – – – –
BVH [3] 405 57,567 – – – – – – – – – –
PR [38] 405 57,332 694 169,042 1385 393,210 2071 807,470 2758 1,358,291 3438 2,110,925
B [6] 405 57,710 694 176,244 1390 412,088 – – – – – –
LCK [5] 405 57,360 694 169,958 1389 396,611 2086 809,493 2761 1,412,363 3442 2,133,645
RPTI [41] 406 58,006 – – – – – – – – – –
RC [44] 408 57,589 – – – – – – – – – –
HG99 [19] 406 57,876 – – – – – – – – – –
I06 [22] 407 57,437 694 170,331 1384 401,285 2070 827,192 2750 1,421,225 3431 2,155,374
MB05 [36] – – 694 168,573 1389 390,386 2082 796,172 2765 1,361,586 3446 2,078,110
HSFLA [26] 405 57,186.87 – – – – – – – – – –
SPLA [2] 406 57,198.93 – – – – – – – – – –

In addition to these algorithms, a large number of researchers have solved the vehicle routing problem with time windows with different algorithms. However, these algorithms could not be used in the comparisons, as their authors tested them only on selected instances from the whole set of 356 instances and, thus, it is very difficult to compare them with the proposed algorithm. The algorithms used in the comparisons are mainly single-population metaheuristics with very effective strategies for avoiding local optima, such as tabu search, large neighborhood search and variable neighborhood search. There are a few evolutionary algorithms with local search and even fewer swarm intelligence algorithms. If we presented analytically all the results for each set and each number of nodes, a huge table (or tables) would be produced and the most important outcomes would not be sufficiently highlighted. Thus, we present only two values, the Cumulative Number of Vehicles (CNV) and the Cumulative Total Distance (CTD), for each of the algorithms. Readers interested in the analytical results of all these algorithms may consult either the corresponding publications or the presentation and analysis in [42].
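The two summary values reported in Table 4 are simple sums over the instances of a set: CNV accumulates the vehicles used and CTD the route distances. The sketch below illustrates this with made-up per-instance results (the three tuples are not data from the paper).

```python
# Hypothetical per-instance results: (vehicles used, total distance).
results = [(10, 828.94), (11, 1050.04), (10, 826.14)]

# Cumulative Number of Vehicles: sum of vehicles over all instances.
cnv = sum(vehicles for vehicles, _ in results)
# Cumulative Total Distance: sum of route distances over all instances.
ctd = sum(distance for _, distance in results)

assert cnv == 31
```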
The results are divided into 6 sets based on the number of nodes. For the CNV, the proposed algorithm has the smallest number of vehicles in 5 out of 6 categories (in the case of 1000 customers, only one algorithm has a smaller number of vehicles). This is a very important outcome, as the proposed algorithm focuses on minimizing the number of vehicles. For the CTD, the results of the proposed algorithm are very satisfactory. The proposed algorithm has the second smallest CTD when the number of nodes is equal to 200, the third smallest when the number of customers is equal to 800, the fourth smallest when the number of nodes is equal to 100, 400 and 1000, and the fifth when the number of nodes is equal to 600. Thus, we can say that the proposed algorithm is a very effective algorithm and one of the best performing algorithms for the solution of the Vehicle Routing Problem with Time Windows.

5. Conclusions and future research

In this paper, a new algorithm based on PSO, the Multi-Adaptive PSO (MAPSO), for the solution of the Vehicle Routing Problem with Time Windows is presented. The algorithm has three adaptive strategies: the first is the initialization of the solutions using the Greedy Randomized Adaptive Search Procedure (GRASP), the second is the use of an Adaptive Memory procedure to replace the Path Relinking procedure in the Combinatorial Neighborhood Topology, and the third is an adaptive strategy for the calculation of the parameters of the PSO. Each of these strategies is very important. The first is important because very good initial solutions are produced. The second is important because the algorithm needs no transformation into continuous values and, with the use of the adaptive memory, tours that have belonged to the best solution for a number of iterations have a high probability of belonging to the global best solution. Finally, the third is important because it allows the user to run the algorithm without tuning a good set of parameters before the execution starts, since the best parameter values are found inside the algorithm during its execution; the algorithm can thus be used as a black box. The algorithm was tested on the two classic sets of benchmark instances and compared with other PSO implementations and with other algorithms from the literature, giving very good results. Our future research will focus on the application of the proposed algorithm to more difficult routing problems.
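The GRASP construction used for the initial solutions can be illustrated with the standard restricted candidate list (RCL) mechanism: at each step the cheapest candidates form the RCL and one of them is picked at random. This is a generic sketch of the technique, assuming an illustrative alpha threshold and cost values, not the paper's actual settings.

```python
import random

def grasp_select(candidates, cost, alpha=0.3):
    """Pick one candidate from the RCL: all candidates whose cost is
    within alpha of the range between the cheapest and most expensive."""
    c_min = min(cost(c) for c in candidates)
    c_max = max(cost(c) for c in candidates)
    threshold = c_min + alpha * (c_max - c_min)
    rcl = [c for c in candidates if cost(c) <= threshold]
    return random.choice(rcl)

# Example: choose the next customer by distance from the current node
# (hypothetical distances). Only the two cheapest pass the threshold.
dist = {1: 5.0, 2: 9.0, 3: 6.0, 4: 12.0}
customer = grasp_select([1, 2, 3, 4], cost=lambda c: dist[c])
assert customer in (1, 3)
```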

References

[1] T.J. Ai, V. Kachitvichyanukul, A particle swarm optimization for the vehicle routing problem with simultaneous pickup and delivery, Comput. Oper.
Res. 36 (2009) 1693–1702.
[2] D. Barbucha, A cooperative population learning algorithm for vehicle routing problem with time windows, Neurocomputing 146 (2014) 210–229.
[3] R. Bent, P. van Hentenryck, A two-stage hybrid local search for the vehicle routing problem with time windows, Transp. Sci. 38 (2004) 515–530.
[4] A.L. Bouthillier, T.G. Crainic, Co-operative parallel metaheuristic for vehicle routing with time windows, Comput. Oper. Res. 32 (2005) 1685–1708.
[5] A.L. Bouthillier, T.G. Crainic, P. Kropf, A guided cooperative search for the vehicle routing problem with time windows, IEEE Trans. Intell. Syst. 20
(2005) 36–42.
[6] O. Braysy, A reactive variable neighborhood search for the vehicle routing problem with time windows, INFORMS J. Comput. 15 (2003) 347–368.
[7] O. Braysy, G. Hasle, W. Dullaert, A multi-start local search algorithm for the vehicle routing problem with time windows, Eur. J. Oper. Res. 159 (2004)
586–605.
[8] W. Chaovalitwongse, D. Kim, P.M. Pardalos, GRASP with a new local search scheme for vehicle routing problems with time windows, J. Comb. Optim.
7 (2) (2003) 179–207.
[9] A.L. Chen, G.K. Yang, Z.M. Wu, Hybrid discrete particle swarm optimization algorithm for capacitated vehicle routing problem, J. Zhejiang Univ. Sci. A
7 (4) (2006) 607–614.
[10] M. Clerc, J. Kennedy, The particle swarm: explosion, stability and convergence in a multi-dimensional complex space, IEEE Trans. Evol. Comput. 6
(2002) 58–73.
[11] T.A. Feo, M.G.C. Resende, Greedy randomized adaptive search procedure, J. Global Optim. 6 (1995) 109–133.
[12] H. Gehring, J. Homberger, A parallel hybrid evolutionary metaheuristic for the vehicle routing problem with time windows, in: Proceedings of EURO-
GEN 99, University of Jyvaskyla, Jyvaskyla, Finland, 1999, pp. 57–64.
[13] H. Gehring, J. Homberger, Parallelization of a two-phase metaheuristic for routing problems with time windows, J. Heuristics 8 (2002) 251–276.
[14] F. Glover, Tabu search I, ORSA J. Comput. 1 (3) (1989) 190–206.
[15] F. Glover, M. Laguna, R. Marti, Scatter search and path relinking: advances and applications, in: F. Glover, G.A. Kochenberger (Eds.), Handbook of
Metaheuristics, Kluwer Academic Publishers, Boston, 2003, pp. 1–36.
[16] B.L. Golden, A.A. Assad, Vehicle routing: methods and studies, North Holland, Amsterdam, 1988.
[17] Y.J. Gong, J. Zhang, O. Liu, R.Z. Huang, H.S.H. Chung, Y.H. Shi, Optimizing the vehicle routing problem with time windows: a discrete particle swarm
optimization approach, IEEE Trans. Syst. Man Cybern.-Part C: Appl. Rev. 42 (2) (2012) 254–267.
[18] P. Hansen, N. Mladenovic, Variable neighborhood search: principles and applications, Eur. J. Oper. Res. 130 (2001) 449–467.
[19] J. Homberger, H. Gehring, Two evolutionary meta-heuristics for the vehicle routing problem with time windows, in: Proceedings INFOR, vol. 37, 1999,
pp. 297–318.
[20] J. Homberger, H. Gehring, A two-phase hybrid metaheuristic for the vehicle routing problem with time windows, Eur. J. Oper. Res. 162 (2005) 220–238.
[21] T. Ibaraki, S. Imahori, M. Kudo, T. Masuda, T. Uno, M. Yagiura, Effective local search algorithms for routing and scheduling problems with general time
window constraints, Transp. Sci. 39 (2005) 206–232.
[22] T. Ibaraki, S. Imahori, K. Nonobe, K. Sobue, T. Uno, M. Yagiura, An iterated local search algorithm for the vehicle routing problem with convex time
penalty functions, Discrete Appl. Math. 156 (2008) 2050–2069.
[23] J. Li, P.M. Pardalos, H. Sun, J. Pei, Y. Zhang, Iterated local search embedded adaptive neighborhood selection approach for the multi-depot vehicle
routing problem with simultaneous deliveries and pickups, Expert Syst. Appl. 42 (7) (2015) 3551–3561.
[24] J. Li, Y. Li, P.M. Pardalos, Multi-depot vehicle routing problem with time windows under shared depot resources, J. Comb. Optim. 31 (2) (2016) 515–532.
[25] A. Lim, X. Zhang, A two-stage heuristic with ejection pools and generalized ejection chains for the vehicle routing problem with time windows,
INFORMS J. Comput. 19 (2007) 443–457.
[26] J. Luo, X. Li, M.R. Chen, H. Liu, A novel hybrid shuffled frog leaping algorithm for vehicle routing problem with time windows, Inf. Sci. 316 (2015)
266–292.
[27] Y. Marinakis, M. Marinaki, A particle swarm optimization algorithm with path relinking for the location routing problem, J. Math. Modell. Algorithms
7 (1) (2008) 59–78.
[28] Y. Marinakis, M. Marinaki, A hybrid genetic - particle swarm optimization algorithm for the vehicle routing problem, Expert Syst. Appl. 37 (2010)
1446–1455.
[29] Y. Marinakis, M. Marinaki, A hybrid multi-swarm particle swarm optimization algorithm for the probabilistic traveling salesman problem, Comput.
Oper. Res. 37 (2010) 432–442.
[30] Y. Marinakis, M. Marinaki, Combinatorial neighborhood topology particle swarm optimization algorithm for the vehicle routing problem, in: M. Mid-
dendorf, C. Blum (Eds.), EvoCOP 2013, LNCS 7832, 2013, pp. 133–144.
[31] Y. Marinakis, M. Marinaki, Combinatorial expanding neighborhood topology particle swarm optimization for the vehicle routing problem with stochas-
tic demands, in: GECCO: 2013, Genetic and Evolutionary Computation Conference, 6–10 July 2013, Amsterdam, The Netherlands, 2013, pp. 49–56.
[32] Y. Marinakis, G. Iordanidou, M. Marinaki, Particle swarm optimization for the vehicle routing problem with stochastic demands, Appl. Soft Comput. 13
(4) (2013) 1693–1704.
[33] Y. Marinakis, M. Marinaki, G. Dounias, A hybrid particle swarm optimization algorithm for the vehicle routing problem, Eng. Appl. Artif. Intell. 23
(2010) 463–472.
[34] Y. Marinakis, M. Marinaki, A. Migdalas, Adaptive tunning of all parameters in a multi-swarm particle swarm optimization algorithm: an application
to the probabilistic traveling salesman problem, in: A. Migdalas, A. Karakitsiou (Eds.), Optimization, Control, and Applications in the Information Age,
Springer Proceedings in Mathematics & Statistics, vol. 130, 2015, pp. 187–207.
[35] Y. Marinakis, M. Marinaki, A. Migdalas, Particle swarm optimization for the vehicle routing problem: a survey and a comparative analysis, in: R. Marti,
et al. (Eds.), Handbook of Heuristics, 2018, pp. 1163–1196.
[36] D. Mester, O. Braysy, Active guided evolution strategies for large-scale vehicle routing problems with time windows, Comput. Oper. Res. 32 (2005)
1593–1614.
[37] D. Mester, O. Braysy, W. Dullaert, A multi-parametric evolution strategies algorithm for vehicle routing problems, Expert Syst. Appl. 32 (2006) 508–517.
[38] D. Pisinger, S. Ropke, A general heuristic for vehicle routing problems, Comput. Oper. Res. 34 (2006) 2403–2435.
[39] E. Prescott-Gagnon, G. Desaulniers, L.M. Rousseau, A branch and price based large neighborhood search algorithm for the vehicle routing problem
with time windows, Networks 54 (2009) 190–204.
[40] Y. Qiu, L. Wang, X. Xu, X. Fang, P.M. Pardalos, Formulations and branch-and-cut algorithms for multi-product multi-vehicle production routing prob-
lems with startup cost, Expert Syst. Appl. 98 (2018) 1–10.
[41] P.P. Repoussis, D.C. Paraskevopoulos, C.D. Tarantilis, G. Ioannou, A reactive greedy randomized variable neighborhood tabu search for the vehicle
routing problem with time windows, in: Hybrid Metaheuristics 2006, LNCS 4030, Springer-Verlag, New York, 2006, pp. 124–138.
[42] P.P. Repoussis, C.D. Tarantilis, G. Ioannou, Arc-guided evolutionary algorithm for the vehicle routing problem with time windows, IEEE Trans. Evol.
Comput. 13 (3) (2009) 624–647.
[43] Y. Rochat, E.D. Taillard, Probabilistic diversification and intensification in local search for vehicle routing, J. Heuristics 1 (1995) 147–167.
[44] R.A. Russell, W.C. Chiang, Scatter search for the vehicle routing problem with time windows, Eur. J. Oper. Res. 169 (2006) 606–622.
[45] M.M. Solomon, Algorithms for the vehicle routing and scheduling problems with time window constraints, Oper. Res. 35 (2) (1987) 254–265.
[46] P. Toth, D. Vigo, Vehicle Routing: Problems, Methods and Applications, Second Edition, MOS-SIAM Series on Optimization, SIAM, Philadelphia, 2014.

[47] T. Vidal, T.G. Crainic, M. Gendreau, C. Prins, A hybrid genetic algorithm with adaptive diversity management for a large class of vehicle routing
problems with time-windows, Comput. Oper. Res. 40 (2013) 475–489.
[48] E.T. Yassen, M. Ayob, M.Z.A. Nazri, N.R. Sabar, Meta-harmony search algorithm for the vehicle routing problem with time windows, Inf. Sci. 325
(2015) 140–158.
[49] Z.H. Zhan, J. Zhang, Y. Li, H.S.H. Chung, Adaptive particle swarm optimization, IEEE Trans. Syst. Man Cybern. - Part B: Cybern. 39 (6) (2009) 1362–1381.
[50] T. Zhang, W.A. Chaovalitwongse, Y.J. Zhang, P.M. Pardalos, The hot-rolling batch scheduling method based on the prize collecting vehicle routing
problem, J. Ind. Manage. Optim. 5 (4) (2009) 749–765.
