jMetal 4.5 User Manual
Antonio J. Nebro, Juan J. Durillo
Contents

Preface

1 Overview
  1.1 Motivation
  1.2 Design goals
  1.3 Summary of Features
  1.4 Reference papers
  1.5 License

2 Installation
  2.1 Unpacking the sources
  2.2 Command line
    2.2.1 Setting the environment variable CLASSPATH
    2.2.2 Compiling the sources
    2.2.3 Configuring and executing an algorithm
  2.3 Netbeans
    2.3.1 Creating the project
    2.3.2 Configuring and executing an algorithm
  2.4 Eclipse
    2.4.1 Creating the project
    2.4.2 Configuring and executing an algorithm
  2.5 IntelliJ Idea
    2.5.1 Creating the project
    2.5.2 Configuring and executing an algorithm
  2.6 Using the jMetal.jar file

3 Architecture
  3.1 Basic Components
    3.1.1 Encoding of Solutions
    3.1.2 Operators
    3.1.3 Problems
    3.1.4 Algorithms
  3.2 jMetal Package Structure
  3.3 Case Study: NSGA-II
    3.3.1 Class NSGAII.java
    3.3.2 Class NSGAII_main

4 Experimentation with jMetal
  4.1 The jmetal.experiments.Settings Class
  4.2 An example of Settings class: NSGA-II
  4.3 The jmetal.experiments.Main class
  4.4 Experimentation Example: NSGAIIStudy
    4.4.1 Defining the experiment
    4.4.2 Running the experiments
    4.4.3 Analyzing the output results
  4.5 Experimentation example: StandardStudy
  4.6 Experiments when the Pareto fronts of the problems are unknown
  4.7 Using quality indicators
  4.8 Running experiments in parallel

5 Parallel Algorithms
  5.1 The IParallelEvaluator Interface
  5.2 Evaluating Solutions In Parallel in NSGA-II: pNSGAII
  5.3 About Parallel Performance

6 How-to's
  6.1 How to use binary representations in jMetal
  6.2 How to use permutation representations in jMetal
  6.3 How to use the Mersenne Twister pseudorandom number generator?
  6.4 How to create a new solution type having mixed variables?
  6.5 How to obtain the non-dominated solutions from a file?
  6.6 How to use the WFG Hypervolume algorithm
  6.7 How to configure the algorithms from a configuration file?

7 What about's
  7.1 What about developing single-objective metaheuristics with jMetal?
  7.2 What about optimized variables and solution types?

Bibliography
List of Tables
5.1 Solving ZDT1 with NSGA-II and pNSGAII with 1, 8, 32, 128, and 512 threads (times in milliseconds).
5.2 Solving ZDT1b with NSGA-II and pNSGAII with 1, 8, 32, 128, and 512 threads (times in milliseconds). ZDT1b is the same problem as ZDT1 but includes an idle loop in the evaluation function to increase its computing time.
Preface
This document contains the manual of jMetal, a framework for multi-objective optimization with metaheuristics developed in the Computer Science Department of the University of Málaga.
The jMetal project began in 2006 with the idea of writing, starting from a former C++ package, a Java tool to be used in our research on multi-objective optimization techniques with metaheuristic algorithms. We decided to make the package publicly available in November 2006, and it was hosted at SourceForge in November 2008 (http://jmetal.sourceforge.net). jMetal is open source software, and it can be downloaded freely from http://sourceforge.net/projects/jmetal; as of today, it has been downloaded more than 9000 times.
Two versions of jMetal written in different languages are works in progress:
• jMetalCpp (http://jmetalcpp.sourceforge.net). This version is coded in C++ and has been available since February 2012. It implements about 70% of the Java version.
• jMetal.Net (http://jmetalnet.sourceforge.net/), which is implemented in C#. Several pre-releases have been available since June 2011; the last release covers about 10% of the original Java version.
This manual covers the Java version. It is structured into eight chapters, covering issues such as installation, the architecture description, examples of use, parallelism, a how-to's section, and a summary of versions and release notes.
Chapter 1
Overview
jMetal stands for Metaheuristic Algorithms in Java, and it is an object-oriented Java-based framework for multi-objective optimization with metaheuristic techniques. jMetal provides a rich set of classes which can be used as the building blocks of multi-objective techniques; this way, by taking advantage of code reuse, the algorithms share the same base components, such as implementations of genetic operators and density estimators, thus facilitating not only the development of new multi-objective techniques but also the execution of different kinds of experiments. The inclusion of a number of classical and state-of-the-art algorithms, many problems usually included in performance studies, and a set of quality indicators allows newcomers not only to study the basic principles of multi-objective optimization with metaheuristics but also to apply them to solve real-world problems.
The jMetal project is continuously evolving. As we are researchers, not a software company, new versions are released when we require new features to be added to the software to carry out our research activities.
1.1 Motivation
When we started to work on metaheuristics for multi-objective optimization in 2004, we did not find any software package satisfying our needs. The publicly available C implementation of NSGA-II, the most widely used multi-objective algorithm, was difficult to use as the basis of new algorithms, in part due to its lack of an object-oriented design. An interesting choice was PISA [2], a C-based framework for multi-objective optimization which is based on separating the algorithm-specific part of an optimizer from the application-specific part. This is carried out by using a shared-file mechanism to communicate between the module executing the application and the module running the metaheuristic. A drawback of PISA is that its internal design hinders code reuse. From our point of view (we are computer science engineers), it became clear that it would be easier to develop our own tool from scratch than to work with the existing software. The result is the Java-based framework jMetal.
When we started to use jMetal in our research, we decided to make it available to the community of people interested in multi-objective optimization. It is licensed under the GNU Lesser General Public License, and it can be obtained freely from http://jmetal.sourceforge.net. During the development of jMetal, other Java-based software tools have been offered by other groups (e.g., EVA2, ECJ, OPT4J). All these toolboxes can be useful for many researchers but, while jMetal is specifically oriented to multi-objective optimization with metaheuristics, most existing frameworks focus mainly on evolutionary algorithms, and many of them are centered on single-objective optimization, offering extensions to the multi-objective domain.
NSGA-II: http://www.iitk.ac.in/kangal/codes.shtml
EVA2: http://www.ra.cs.uni-tuebingen.de/software/EvA2/
ECJ: http://www.cs.gmu.edu/~eclab/projects/ecj/
OPT4J: http://opt4j.sourceforge.net/
However, we have frequently faced the need to solve single-objective optimization problems, so we have used jMetal for that purpose as well, given that developing single-objective metaheuristics from their multi-objective counterparts is usually an easy task. Thus, jMetal currently provides many algorithms to solve problems having a single objective.
– Constrained problems: Srinivas [35], Tanaka [36], Osyczka2 [31], Constr_Ex [6], Golinski [20], Water [32].
– Combinatorial problems: multi-objective traveling salesman problem (mTSP), multi-objective quadratic assignment problem (mQAP).
• Single-objective metaheuristics: GAs (generational, steady-state, cellular), PSO, DE, CMA-ES,
(µ + λ) and (µ, λ) ESs.
• Implementation of a number of widely used quality indicators: Hypervolume [44], Spread [6],
Generational Distance [37], Inverted Generational Distance [37], Epsilon [17].
• Different variable representations: binary, real, binary-coded real, integer, permutation.
• Validation of the implementation: we compared our implementations of NSGA-II and SPEA2 with
the original versions, achieving competitive results [9].
• Support for performing experimental studies, including the automatic generation of
– LaTeX tables with the results after applying quality indicators,
– LaTeX tables summarizing statistical pairwise comparisons of the obtained results by means of the Wilcoxon test, and
– R (http://www.r-project.org/) boxplots summarizing those results.
In addition, jMetal includes the possibility of using several threads for performing these kinds of
experiments in such a way that several independent runs can be executed in parallel by using
modern multi-core CPUs.
• A Web site (http://jmetal.sourceforge.net) containing the source codes, the user manual and,
among other information, the Pareto fronts of the included MOPs, references to the implemented
algorithms, and references to papers using jMetal.
1.4 Reference papers

@article{DN11,
  author = "J. J. Durillo and A. J. Nebro",
  title = "{jMetal}: A Java framework for multi-objective optimization",
  journal = "Advances in Engineering Software",
  volume = "42",
  number = "10",
  pages = "760-771",
  year = "2011",
  issn = "0965-9978",
  doi = "10.1016/j.advengsoft.2011.05.014",
  url = "http://www.sciencedirect.com/science/article/pii/S0965997811001219",
}
@inproceedings{DNA10,
  Address = {Barcelona, Spain},
  Author = {J.J. Durillo and A.J. Nebro and E. Alba},
  Booktitle = {CEC 2010},
  Month = {July},
  Pages = {4138-4325},
  OPTPublisher = {Springer Berlin / Heidelberg},
  OPTSeries = {Lecture Notes in Computer Science},
  Title = {The {jMetal} Framework for Multi-Objective Optimization: Design and Architecture},
  OPTVolume = {5467},
  Year = {2010}}
1.5 License
jMetal is licensed under the GNU Lesser General Public License (http://creativecommons.org/licenses/LGPL/2.1/).
Chapter 2
Installation
jMetal is written in Java and does not require any additional software; the only requirement is Java JDK 1.5 or newer. The source code is bundled in a tar.gz package which can be downloaded from SourceForge. A jar file is also available if you are only interested in running the provided algorithms. The jMetal Web page at SourceForge is: http://jmetal.sourceforge.net.
There exist several ways to work with Java programs; we briefly describe here how to compile and run algorithms developed with jMetal by using the command line in a text terminal and the Integrated Development Environments (IDEs) NetBeans, Eclipse, and IntelliJ IDEA. There are several ways to create a project from existing sources with these IDEs; we merely show one way to do it.
In this chapter, we also detail how to execute the algorithms from the jar file, without having to work with the source code.
2.1 Unpacking the sources
To unpack the source bundle, type:
gzip -d jmetal.tar.gz
tar xf jmetal.tar
Let us name the directory where the tarball is decompressed JMETALHOME. It will contain the source code of jMetal, which has the structure depicted in Figure 2.1.
export CLASSPATH=$CLASSPATH:$JMETALHOME
javac jmetal/problems/*.java
javac jmetal/problems/ZDT/*.java
javac jmetal/problems/DTLZ/*.java
javac jmetal/problems/WFG/*.java
javac jmetal/metaheuristics/nsgaII/*.java
javac jmetal/metaheuristics/paes/*.java
javac jmetal/metaheuristics/spea2/*.java
javac jmetal/metaheuristics/mopso/*.java
javac jmetal/metaheuristics/mocell/*.java
javac jmetal/metaheuristics/abyss/*.java
Of course, you do not need to compile all of them; choose only those you are interested in.
1. Configuring the algorithm by editing the NSGAII_main.java program (see Section 3.3).
Here, we briefly describe the first option, which consists of editing the file NSGAII_main.java, belonging to the package jmetal/metaheuristics/nsgaII, recompiling it, and executing it:
javac jmetal/metaheuristics/nsgaII/*.java
java jmetal.metaheuristics.nsgaII.NSGAII_main
As a result, you will obtain two files: VAR, containing the values of the variables of the obtained approximation set, and FUN, which stores the corresponding values of the objective functions. Needless to say, you can change the names of these files by editing NSGAII_main.java.
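For reference, the two sentences in NSGAII_main that write these files are the following (they also appear in Listing 3.19, Section 3.3); changing the output file names only requires editing their string arguments:

// Write the decision variable values and the objective values of the obtained
// approximation set; replace "VAR" and "FUN" by any other file names if desired.
population.printVariablesToFile("VAR");
population.printObjectivesToFile("FUN");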
2.3 Netbeans
We describe here how to compile and use jMetal with NetBeans 7.0.1.
2. Choose Java Project with Existing Sources from the General category, and click the Next button.
3. Write a project name (e.g. jMetal) and choose the directory (folder) where you want to deploy the
project. Check Set as Main Project and Click Next.
4. Click Add Folder to add the JMETALHOME directory to the source package folders. Click Finish
2.4 Eclipse
We describe next how to compile and use jMetal using Eclipse Kepler.
2. Write a project name (e.g., jMetal) and click on the Next button.
3. Select Link additional source and browse to set JMETALHOME as linked folder location.
4. Click Finish.
Chapter 3
Architecture
We use the Unified Modelling Language (UML) to describe the architecture and components of jMetal.
A UML class diagram representing the main components and their relationships is depicted in Figure 3.1.
The diagram is a simplified version in order to make it understandable. The basic architecture of
jMetal relies in that an Algorithm solves a Problem using one (and possibly more) SolutionSet and a
set of Operator objects. We have used a generic terminology to name the classes in order to make them
general enough to be used in any metaheuristic. In the context of evolutionary algorithms, populations
and individuals correspond to SolutionSet and Solution jMetal objects, respectively; the same can be
applied to particle swarm optimization algorithms concerning the concepts of swarm and particles.
[Figure 3.1: UML class diagram showing the basic components of jMetal: Algorithm, Operator, SolutionSet, Solution, SolutionType, Variable, and Problem, together with examples of concrete algorithms (NSGAII, PAES, AbYSS, MOEAD, IBEA, MOCell, SMPSO), variable types (Binary, Real, BinaryReal, Int, Permutation), solution types (BinarySolutionType, RealSolutionType, BinaryRealSolutionType, IntSolutionType, IntRealSolutionType, PermutationSolutionType), and problems (Schaffer, Kursawe, ZDT1, DTLZ2, WFG3).]

3.1.1 Encoding of Solutions
1  // RealSolutionType.java
2
3  package jmetal.encodings.solutionType;
4
5  import jmetal.core.Problem;
6  import jmetal.core.SolutionType;
7  import jmetal.core.Variable;
8  import jmetal.encodings.variable.Real;
9
10 /**
11  * Class representing a solution type composed of real variables
12  */
13 public class RealSolutionType extends SolutionType {
14
15   /**
16    * Constructor
17    * @param problem
18    * @throws ClassNotFoundException
19    */
20   public RealSolutionType(Problem problem) throws ClassNotFoundException {
21     super(problem);
22   } // Constructor
23
24   /**
25    * Creates the variables of the solution
26    * @param decisionVariables
27    */
28   public Variable[] createVariables() {
29     Variable[] variables = new Variable[problem_.getNumberOfVariables()];
30
31     for (int var = 0; var < problem_.getNumberOfVariables(); var++)
32       variables[var] = new Real(problem_.getLowerLimit(var),
33                                 problem_.getUpperLimit(var));
34
35     return variables;
36   } // createVariables
37 } // RealSolutionType
Listing 3.1: RealSolutionType class, which represents solutions composed of real variables
Listing 3.2: Code of the createVariables() method for creating solutions consisting of a Real, an Integer, and a Permutation
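As a minimal sketch of what such a createVariables() method could look like (the bounds and the permutation length below are illustrative values, and we assume that the Real, Int, and Permutation variable classes take their bounds or length as constructor arguments, as Real does in Listing 3.1):

public Variable[] createVariables() {
  Variable[] variables = new Variable[3];

  variables[0] = new Real(0.0, 1.0);   // a real variable in [0.0, 1.0]
  variables[1] = new Int(0, 10);       // an integer variable in [0, 10]
  variables[2] = new Permutation(20);  // a permutation of length 20

  return variables;
} // createVariables

In a complete solution type, the bounds would typically be taken from the associated problem_ object, as in Listing 3.1.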
1  ...
2  // Step 1: creating a map for the operator parameters
3  HashMap parameters = new HashMap();
4
5  // Step 2: configure the operator
6  parameters.put("probability", 0.9);
7  parameters.put("distributionIndex", 20.0);
8
9  // Step 3: create the operator
10 crossover = CrossoverFactory.getCrossoverOperator("SBXCrossover", parameters);
11
12 // Step 4: add the operator to an algorithm
13 algorithm.addOperator("crossover", crossover);
14 ...
Listing 3.3: Creating the SBX crossover operator and adding it to an algorithm
Once we have the means to define new solution representations or to use existing ones, we can create solutions, which can be grouped into SolutionSet objects (i.e., populations or swarms).
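For instance, a minimal sketch of creating solutions and grouping them into a SolutionSet could be the following (we assume a problem object created beforehand, e.g., as in Listing 3.6):

SolutionSet population = new SolutionSet(100); // a solution set with capacity for 100 solutions
for (int i = 0; i < 100; i++) {
  Solution newSolution = new Solution(problem); // a new solution for the given problem
  problem.evaluate(newSolution);                // compute its objective values
  population.add(newSolution);                  // group it into the solution set
}

This is essentially what the metaheuristics do when creating their initial populations (see Listing 3.10).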
3.1.2 Operators
Metaheuristic techniques are based on modifying or generating new solutions from existing ones by
means of the application of different operators. For example, EAs make use of crossover, mutation, and
selection operators for modifying solutions. In jMetal, any operation altering or generating solutions (or
sets of them) inherits from the Operator class, as can be seen in Fig. 3.1.
The framework already incorporates a number of operators, which can be classified into four different
classes:
• Crossover. Represents the recombination or crossover operators used in EAs. Some of the included
operators are the simulated binary (SBX) crossover [4] and the two-points crossover for real and
binary encodings, respectively.
• Mutation. Represents the mutation operator used in EAs. Examples of included operators are
polynomial mutation [4] (real encoding) and bit-flip mutation (binary encoding).
• Selection. This kind of operator is used for performing the selection procedures in many EAs. An
example of selection operator is the binary tournament.
• LocalSearch. This class is intended for representing local search procedures. It contains an extra method for consulting how many evaluations have been performed after it has been applied.
Each operator contains the setParameter() and getParameter() methods, which are used for adding and accessing a specific parameter of the operator. For example, the SBX crossover requires two parameters, a crossover probability (as most crossover operators do) plus a value for the distribution index (specific to this operator), while a single-point mutation operator only requires the mutation probability. The operators can also receive their parameters by passing them as an argument when the operator object is created.
It is worth noting that when an operator is applied to a given solution, the solution type of that solution is known. Thus, we can define, for example, a unique two-points crossover operator that can be applied to both binary and real solutions, using the solution type to select the appropriate code in each case.
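As a hedged illustration of this idea (a sketch, not code taken from the framework), the execute() method of such an operator could branch on the solution type of the received solution:

public Object execute(Object object) throws JMException {
  Solution solution = (Solution) object;

  if (solution.getType().getClass() == RealSolutionType.class) {
    // ... apply the variant of the operator for real encodings
  } else if (solution.getType().getClass() == BinarySolutionType.class) {
    // ... apply the variant of the operator for binary encodings
  }

  return solution;
}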
To illustrate how operators are used and implemented in jMetal, let us take as an example the SBX crossover operator. It has two parameters: the crossover probability and the distribution index. The way of creating and configuring the operator is depicted in Listing 3.3.
1  ...
2  public class MOMetaheuristic extends Algorithm {
3    ...
4    Operator crossoverOperator;
5    crossoverOperator = operators_.get("crossover");
6    ...
7    Solution[] parents = new Solution[2];
8    parents[0] = (Solution) selectionOperator.execute(population);
9    parents[1] = (Solution) selectionOperator.execute(population);
10   Solution[] offSpring = (Solution[]) crossoverOperator.execute(parents);
11   ...
12 } // MOMetaheuristic
Listing 3.4: Using previously configured operators inside an algorithm
First, a Java HashMap (a map of (name, object) pairs) is created to store the operator parameters (line 3), and the parameters are set in lines 6-7; second, the operator is created (line 10); finally, it is added to an algorithm in line 13.
To make use of the operator inside a given algorithm, the following steps have to be carried out (see Listing 3.4). First, the algorithm must get the previously created operator (line 5), which is then ready to be used; second, the operator can be executed by invoking its execute() method with the corresponding parameters. In the case of the SBX crossover, the parameters are two solutions previously obtained, typically after applying a selection operator (lines 8-9). Let us remark here that whenever a crossover operator is applied to a pair of solutions and the result is another pair of solutions, the code in Listing 3.4 can remain as is; there is no need to make any modifications.
The implementation of the SBX crossover operator in jMetal is included in class SBXCrossover (see Listing 3.5). We can see that this class belongs to package jmetal.operators.crossover and extends the Crossover class (lines 3 and 5), and that the two parameters characterizing the operator (crossover probability and distribution index) are declared in lines 8 and 9. Let us pay attention now to lines 14-15. An operator can be applied to a given set of encodings, so the adopted approach is to indicate the valid solution types in a list. In the case of the SBX crossover, the operator is intended for the Real and ArrayReal solution types, so they are included in the list called VALID_TYPES. Later, this list is used in the execute() method to check that the solutions to be combined have the correct representation.
The constructor (lines 19-26) merely gets the map received as an argument and checks whether some of the parameters have to be set.
The execute() method receives as a parameter a generic Java Object (line 37), which must represent an array of two solutions, the parent solutions (line 38). We can see in lines 46-51 how the VALID_TYPES list is used to check that the parent solutions have valid encodings. Finally, the method calls the doCrossover() method (line 54), which actually performs the crossover and returns an array with the two newly generated solutions, which are the return value of the method (line 56).
3.1.3 Problems
In jMetal, all the problems inherits from class Problem. This class contains two basic methods: evaluate()
and evaluateConstraints(). Both methods receive a Solution representing a candidate solution to
the problem; the first one evaluates it, and the second one determines the overall constraint violation of
this solution. All the problems have to define the evaluate() method, while only problems having side
constraints need to define evaluateConstraints(). The constraint handling mechanism implemented
by default is the one proposed in [6].
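For example, a minimal sketch of evaluating a candidate solution of the Kursawe problem (defined in Listing 3.6) could be the following; the constructor arguments and the getObjective() calls are illustrative, and exception handling is omitted:

Problem problem = new Kursawe("Real", 3);  // Kursawe with real encoding and 3 variables
Solution solution = new Solution(problem); // a randomly created candidate solution

problem.evaluate(solution);                // compute its objective values
// problem.evaluateConstraints(solution);  // only needed for problems with side constraints

double f1 = solution.getObjective(0);
double f2 = solution.getObjective(1);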
A key design feature in jMetal is that the problem defines the solution types that are allowed to solve it. Listing 3.6 shows the code used for implementing Kursawe's problem (irrelevant code is omitted). As we can observe, it extends class Problem (line 5). After that, a constructor method is defined for creating instances of this problem (lines 9-28), which has two parameters: a string containing a solution type identifier and the number of decision variables of the problem.
1  // SBXCrossover.java
2  ...
3  package jmetal.operators.crossover;
4  ...
5  public class SBXCrossover extends Crossover {
6    ...
7    public static final double ETA_C_DEFAULT_ = 20.0;
8    private Double crossoverProbability_ = null;
9    private double distributionIndex_ = ETA_C_DEFAULT_;
10
11   /**
12    * Valid solution types to apply this operator
13    */
14   private static List VALID_TYPES = Arrays.asList(RealSolutionType.class,
15                                                   ArrayRealSolutionType.class);
16   /**
17    * Constructor
18    */
19   public SBXCrossover(HashMap<String, Object> parameters) {
20     super(parameters);
21
22     if (parameters.get("probability") != null)
23       crossoverProbability_ = (Double) parameters.get("probability");
24     if (parameters.get("distributionIndex") != null)
25       distributionIndex_ = (Double) parameters.get("distributionIndex");
26   } // SBXCrossover
27
28   public Solution[] doCrossover(double probability,
29                                 Solution parent1,
30                                 Solution parent2) throws JMException {
31     ...
32   }
33
34   /**
35    * Executes the operation
36    */
37   public Object execute(Object object) throws JMException {
38     Solution[] parents = (Solution[]) object;
39
40     if (parents.length != 2) {
41       Configuration.logger_.severe("SBXCrossover.execute: operator needs two " +
42                                    "parents");
43       ...
44     } // if
45
46     if (!(VALID_TYPES.contains(parents[0].getType().getClass()) &&
47           VALID_TYPES.contains(parents[1].getType().getClass()))) {
48       Configuration.logger_.severe("SBXCrossover.execute: the solutions " +
49         "type " + parents[0].getType() + " is not allowed with this operator");
50       ...
51     } // if
52
53     Solution[] offSpring;
54     offSpring = doCrossover(crossoverProbability_, parents[0], parents[1]);
55
56     return offSpring;
57   } // execute
58 } // SBXCrossover
Listing 3.5: Implementation of the SBX crossover operator (class SBXCrossover)
As a general rule, all the problems should have as the first parameter the string indicating the solution type. The basic features of the problem (number of variables, number of objectives, and number of constraints) are defined in lines 10-12. The limits of the values of the decision variables are set in lines 15-21. The sentences between lines 23-27 are used to specify that the allowed solution representations are binary-coded real and real, so the corresponding SolutionType object is created and assigned to a state variable.
After the constructor, the evaluate() method is redefined (lines 33-43); in this method, after com-
puting the two objective function values, they are stored into the solution by using the setObjective
method of Solution (lines 41 and 42).
Many commonly used benchmark problems are already included in jMetal. Examples are the ones proposed by Zitzler-Deb-Thiele (ZDT) [42], Deb-Thiele-Laumanns-Zitzler (DTLZ) [5], the Walking-Fish-Group (WFG) test problems [16], and the Li-Zhang benchmark [22].
3.1.4 Algorithms
The last core class in the UML diagram in Fig. 3.1 to comment is Algorithm, an abstract class which
must be inherited by the metaheuristics included in the framework. In particular, the abstract method
execute() must be implemented; this method is intended to run the algorithm, and it returns as a
result a SolutionSet.
An instance object of Algorithm may require some application-specific parameters, that can be added
and accessed by using the methods addParameter() and getParameter(), respectively. Similarly, an
algorithm may also make use of some operators, so methods for incorporating operators (addOperator())
and to get them (getOperator()) are provided. A detailed example of algorithm can be found in
Section 3.3, where the implementation of NSGA-II is explained.
Besides NSGA-II, jMetal includes the implementation of a number of both classic and modern
multi-objective optimizers; some examples are: SPEA2 [43], PAES [18], OMOPSO [33], MOCell [26],
AbYSS [30], MOEA/D [22], GDE3 [19], IBEA [46], or SMPSO [25].
1  // Kursawe.java
2  ...
3  package jmetal.problems;
4  ...
5  public class Kursawe extends Problem {
6    /**
7     * Constructor.
8     */
9    public Kursawe(String solutionType, Integer numberOfVariables) throws ClassNotFoundException {
10     numberOfVariables_ = numberOfVariables.intValue();
11     numberOfObjectives_ = 2;
12     numberOfConstraints_ = 0;
13     problemName_ = "Kursawe";
14
15     upperLimit_ = new double[numberOfVariables_];
16     lowerLimit_ = new double[numberOfVariables_];
17
18     for (int i = 0; i < numberOfVariables_; i++) {
19       lowerLimit_[i] = -5.0;
20       upperLimit_[i] = 5.0;
21     } // for
22
23     if (solutionType.compareTo("BinaryReal") == 0) {
24       solutionType_ = new BinaryRealSolutionType(this);
25     } else if (solutionType.compareTo("Real") == 0) {
26       solutionType_ = new RealSolutionType(this);
27     }
28   } // Kursawe
29
30   /**
31    * Evaluates a solution
32    */
33   public void evaluate(Solution solution) throws JMException {
34     double[] fx = new double[2]; // function values
35
36     fx[0] = // f1 value
37     ...
38     fx[1] = // f2 value
39     ...
40
41     solution.setObjective(0, fx[0]);
42     solution.setObjective(1, fx[1]);
43   } // evaluate
44 } // Kursawe
Listing 3.6: Implementation of Kursawe's problem (class Kursawe)
3.2 jMetal Package Structure

• Package core: This package contains the basic ingredients to be used by the metaheuristics developed under jMetal. The main classes in this package, which were described in Section 3.1, are: Algorithm, Operator, Problem, Variable, Solution, SolutionSet, and SolutionType. This package was named jmetal.base in versions prior to jMetal 4.0.
• Package problems: All the problems available in jMetal are included in this package. Here we can
find well-known benchmarks (ZDT, DTLZ, and WFG) plus other more recent problem families
(LZ07, CEC2009Competition). Furthermore, we can find many other problems (Fonseca, Kursawe,
Schaffer, OKA2, etc.).
• Package metaheuristics: This package contains the metaheuristics implemented in jMetal. The list of techniques includes NSGA-II, SPEA2, PAES, PESA-II, GDE3, FastPGA, MOCell, AbYSS, OMOPSO, IBEA, and MOEA/D. Although jMetal is aimed at multi-objective optimization, a number of single-objective algorithms are included in the jmetal.metaheuristics.singleObjective package.
• Package jmetal.operators: This package contains different kinds of operator objects, including
crossover, mutation, selection, and local search operators. We give next an example of an operator
of each type:
• Package jmetal.encodings: This package contains the basic variable representations and the solution types included in the framework. In jMetal version 4.0 the following classes are included:
– Variables: Binary, Real, BinaryReal (binary-coded real), Int, Permutation, ArrayInt, and ArrayReal.
– Solution types: BinarySolutionType, RealSolutionType, BinaryRealSolutionType, IntSolutionType, PermutationSolutionType, ArrayRealSolutionType, ArrayIntSolutionType, IntRealSolutionType (combines Int and Real variables), and ArrayRealAndBinarySolutionType (combines ArrayReal and Binary variables).
• Package util: A number of utility classes are included in this package, such as a pseudorandom number generator, different types of archives, a neighborhood class to be used in cellular evolutionary algorithms, comparators, etc.
[Figure: UML diagram of the Algorithm abstract class (input/output parameters, execute(), addOperator(), getOperator(), setInputParameter(), getInputParameter(), setOutputParameter(), getOutputParameter(), and getProblem()) and the NSGAII class, which extends it and solves a Problem.]
• Package experiments: This package contains a set of classes intended to carry out typical studies in multi-objective optimization. It is described in Chapter 4.

3.3 Case Study: NSGA-II
1  // NSGAII.java
2
3  package jmetal.metaheuristics.nsgaII;
4
5  import jmetal.core.*;
6
7  /**
8   * This class implements the NSGA-II algorithm.
9   */
10 public class NSGAII extends Algorithm {
11
12   /**
13    * Constructor
14    * @param problem Problem to solve
15    */
16   public NSGAII(Problem problem) {
17     super(problem);
18   } // NSGAII
19
20   /**
21    * Runs the NSGA-II algorithm.
22    */
23   public SolutionSet execute() {
24   } // execute
25 } // NSGA-II
Listing 3.7: Basic structure of the NSGAII class
as well as the implementation of execute(). Next, we analyze the implementation of the NSGAII class in jMetal (file jmetal/metaheuristics/nsgaII/NSGAII.java). The basic code structure implementing the class is presented in Listing 3.7.
Let us focus on the method execute() (see Listing 3.8). First, we describe the objects needed to implement the algorithm. The parameters to store the population size and the maximum number of evaluations are declared in lines 2-3. The next variable, evaluations, is a counter of the number of computed evaluations. The objects declared in lines 6-7 are needed to illustrate the use of quality indicators inside the algorithms; we will explain their use later. Lines 10-12 contain the declaration of the populations needed to implement NSGA-II: the current population, an offspring population, and an auxiliary population used to join the other two. Next, we find the three genetic operators (lines 14-16) and a Distance object (from package jmetal.util), which will be used to calculate the crowding distance.
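A sketch of the kind of declarations described above is shown next (the identifiers follow the ones used in the remaining listings of this section; the actual code is in NSGAII.java):

int populationSize;
int maxEvaluations;
int evaluations;                    // counter of computed evaluations

QualityIndicator indicators;        // quality indicator object (optional)
int requiredEvaluations;            // evaluations needed to reach the target hypervolume

SolutionSet population;             // current population
SolutionSet offspringPopulation;    // offspring population
SolutionSet union;                  // auxiliary population joining the other two

Operator mutationOperator;
Operator crossoverOperator;
Operator selectionOperator;

Distance distance = new Distance(); // used to compute the crowding distance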
Once we have declared all the needed objects, we proceed to initialize them (Listing 3.9). The
parameters populationSize and maxEvaluations are input parameters whose values are obtained in
lines 23-24; the same applies to indicators, although this parameter is optional (the other two are
required). The population and the counter of evaluations are initialized next (lines 28-29), and finally
the mutation, crossover, and selection operators are obtained (lines 34-36).
The initial population is created in the loop included in Listing 3.10. We can observe how new solutions are created, evaluated, and inserted into the population.
The main loop of the algorithm is included in the piece of code contained in Listing 3.11. We can
observe the inner loop performing the generations (lines 55-71), where the genetic operators are applied.
The number of iterations of this loop is populationSize/2 because it is assumed that the crossover
returns two solutions; in the case of using a crossover operator returning only one solution, the sentence
in line 55 should be modified accordingly.
After the offspring population has been filled, the next step in NSGA-II is to apply ranking and
crowding to the union of the current and offspring populations to select the new individuals in the next
generation. The code is included in Listing 3.12, which basically follows the algorithm described in [6].
20 ...
21
22 // Read the parameters
23 populationSize = ((Integer) getInputParameter("populationSize")).intValue();
24 maxEvaluations = ((Integer) getInputParameter("maxEvaluations")).intValue();
25 indicators = (QualityIndicator) getInputParameter("indicators");
26
27 // Initialize the variables
28 population = new SolutionSet(populationSize);
29 evaluations = 0;
30
31 requiredEvaluations = 0;
32
33 // Read the operators
34 mutationOperator = operators_.get("mutation");
35 crossoverOperator = operators_.get("crossover");
36 selectionOperator = operators_.get("selection");
37 ...
Listing 3.9: execute() method: reading the input parameters and the operators

38 ...
39 // Create the initial solutionSet
40 Solution newSolution;
41 for (int i = 0; i < populationSize; i++) {
42   newSolution = new Solution(problem_);
43   problem_.evaluate(newSolution);
44   problem_.evaluateConstraints(newSolution);
45   evaluations++;
46   population.add(newSolution);
47 } // for
48 ...
Listing 3.10: execute() method: creating and evaluating the initial population

49 // Generations
50 while (evaluations < maxEvaluations) {
51
52   // Create the offSpring solutionSet
53   offspringPopulation = new SolutionSet(populationSize);
54   Solution[] parents = new Solution[2];
55   for (int i = 0; i < (populationSize / 2); i++) {
56     if (evaluations < maxEvaluations) {
57       // obtain parents
58       parents[0] = (Solution) selectionOperator.execute(population);
59       parents[1] = (Solution) selectionOperator.execute(population);
60       Solution[] offSpring = (Solution[]) crossoverOperator.execute(parents);
61       mutationOperator.execute(offSpring[0]);
62       mutationOperator.execute(offSpring[1]);
63       problem_.evaluate(offSpring[0]);
64       problem_.evaluateConstraints(offSpring[0]);
65       problem_.evaluate(offSpring[1]);
66       problem_.evaluateConstraints(offSpring[1]);
67       offspringPopulation.add(offSpring[0]);
68       offspringPopulation.add(offSpring[1]);
69       evaluations += 2;
70     } // if
71   } // for
Listing 3.11: execute() method: main loop and offspring creation
The piece of code in Listing 3.13 illustrates the use of quality indicators inside a metaheuristic. In particular, it shows the code we used in [29] to study the convergence speed of multi-objective metaheuristics. As we commented before, if the indicator object was specified as an input parameter (otherwise, it would be null; line 120), we apply it to test whether the hypervolume of the new population, at the end of each generation, is equal to or greater than 98% of the hypervolume of the true Pareto front (see [29] for further details). In case of success, the variable requiredEvaluations is assigned the current number of function evaluations (line 124). Once this variable is not zero, we do not need to carry out the test any more; that is the reason for including the condition in line 121.
The last sentences of the execute() method are included in Listing 3.14. In line 129 we can observe that the variable requiredEvaluations is returned as an output parameter. Finally, we apply ranking to the resulting population to return only the non-dominated solutions (lines 132-133).
72
73 // Create the solutionSet union of solutionSet and offSpring
74 union = ((SolutionSet) population).union(offspringPopulation);
75
76 // Ranking the union
77 Ranking ranking = new Ranking(union);
78
79 int remain = populationSize;
80 int index = 0;
81 SolutionSet front = null;
82 population.clear();
83
84 // Obtain the next front
85 front = ranking.getSubfront(index);
86
87 while ((remain > 0) && (remain >= front.size())) {
88   // Assign crowding distance to individuals
89   distance.crowdingDistanceAssignment(front, problem_.getNumberOfObjectives());
90   // Add the individuals of this front
91   for (int k = 0; k < front.size(); k++) {
92     population.add(front.get(k));
93   } // for
94
95   // Decrement remain
96   remain = remain - front.size();
97
98   // Obtain the next front
99   index++;
100  if (remain > 0) {
101    front = ranking.getSubfront(index);
102  } // if
103 } // while
104
105 // Remain is less than front(index).size, insert only the best one
106 if (remain > 0) { // front contains individuals to insert
107   distance.crowdingDistanceAssignment(front, problem_.getNumberOfObjectives());
108   front.sort(new CrowdingComparator());
109   for (int k = 0; k < remain; k++) {
110     population.add(front.get(k));
111   } // for
112
113   remain = 0;
114 } // if
Listing 3.12: execute() method: ranking and crowding distance to build the next population
115 ...
116 // This piece of code shows how to use the indicator object into the code
117 // of NSGA-II. In particular, it finds the number of evaluations required
118 // by the algorithm to obtain a Pareto front with a hypervolume higher
119 // than the hypervolume of the true Pareto front.
120 if ((indicators != null) &&
121     (requiredEvaluations == 0)) {
122   double HV = indicators.getHypervolume(population);
123   if (HV >= (0.98 * indicators.getTrueParetoFrontHypervolume())) {
124     requiredEvaluations = evaluations;
125   } // if
126 } // if
Listing 3.13: execute() method: using the hypervolume quality indicator.
127 ...
128 // Return as output parameter the required evaluations
129 setOutputParameter("evaluations", requiredEvaluations);
130
131 // Return the first non-dominated front
132 Ranking ranking = new Ranking(population);
133 return ranking.getSubfront(0);
134 } // execute
Listing 3.14: execute() method: returning the results
1  // NSGAII_main.java
2  //
3  // Author:
4  //    Antonio J. Nebro <antonio@lcc.uma.es>
5  //    Juan J. Durillo <durillo@lcc.uma.es>
6  //
7  // Copyright (c) 2011 Antonio J. Nebro, Juan J. Durillo
8  //
9  // This program is free software: you can redistribute it and/or modify
10 // it under the terms of the GNU Lesser General Public License as published by
11 // the Free Software Foundation, either version 3 of the License, or
12 // (at your option) any later version.
13 //
14 // This program is distributed in the hope that it will be useful,
15 // but WITHOUT ANY WARRANTY; without even the implied warranty of
16 // MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
17 // GNU Lesser General Public License for more details.
18 //
19 // You should have received a copy of the GNU Lesser General Public License
20 // along with this program. If not, see <http://www.gnu.org/licenses/>.
21
22 package jmetal.metaheuristics.nsgaII;
23
24 import jmetal.core.*;
25 import jmetal.operators.crossover.*;
26 import jmetal.operators.mutation.*;
27 import jmetal.operators.selection.*;
28 import jmetal.problems.*;
29 import jmetal.problems.DTLZ.*;
30 import jmetal.problems.ZDT.*;
31 import jmetal.problems.WFG.*;
32 import jmetal.problems.LZ09.*;
33
34 import jmetal.util.Configuration;
35 import jmetal.util.JMException;
36 import java.io.IOException;
37 import java.util.*;
38
39 import java.util.logging.FileHandler;
40 import java.util.logging.Logger;
41
42 import jmetal.qualityIndicator.QualityIndicator;
43
44 public class NSGAII_main {
45   public static Logger logger_;           // Logger object
46   public static FileHandler fileHandler_; // FileHandler object
47
48   /**
49    * @param args Command line arguments.
50    * @throws JMException
51    * @throws IOException
52    * @throws SecurityException
53    * Usage: three options
54    *    - jmetal.metaheuristics.nsgaII.NSGAII_main
55    *    - jmetal.metaheuristics.nsgaII.NSGAII_main problemName
56    *    - jmetal.metaheuristics.nsgaII.NSGAII_main problemName paretoFrontFile
57    */
58   public static void main(String[] args) throws
and the program will calculate a number of quality indicator values at the end of the execution of the algorithm. This option is also a requirement to use quality indicators inside the algorithms.
Listing 3.17 contains the code used to declare the objects required to execute the algorithm (lines 59-63). The logger object is initialized in lines 69-72, and the log messages will be written to a file named "NSGAII_main.log". The sentences included between lines 74 and 92 process the arguments of the main() method. The default problem to solve is indicated after line 85. The key point here is that, at the end of this block of sentences, an instance of the Problem class must be obtained. This is the only argument needed to create an instance of the algorithm, as we can see in line 94. The next line contains the sentence that should be used if we intend to use the steady-state version of NSGA-II, which is also included in jMetal.
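As a rough sketch of the steps just described (the ProblemFactory call and the Kursawe default problem are assumptions based on this description; the exact code is in NSGAII_main.java):

Problem problem;                    // the problem to solve
QualityIndicator indicators = null; // the quality indicator object

if (args.length == 1) {
  Object[] params = {"Real"};
  problem = (new ProblemFactory()).getProblem(args[0], params);
} else if (args.length == 2) {
  Object[] params = {"Real"};
  problem = (new ProblemFactory()).getProblem(args[0], params);
  indicators = new QualityIndicator(problem, args[1]);
} else {
  problem = new Kursawe("Real", 3); // default problem (assumed here)
}

Algorithm algorithm = new NSGAII(problem);
// Algorithm algorithm = new ssNSGAII(problem); // steady-state variant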
Once an object representing the algorithm to run has been created, it must be configured. In the code included in Listing 3.18, the input parameters are set in lines 97-98, the crossover and mutation operators are specified in lines 101-109, and the selection operator is chosen in line 113. Once the operators have been specified, they are added to the algorithm object in lines 116-118. The sentence in line 121 sets the indicator object as an input parameter.
When the algorithm has been configured, it is executed by invoking its execute() method (line 125 in Listing 3.19). When it has finished, the running time is reported, and the obtained solutions and their objective values are stored in two files (lines 131 and 133). Finally, if the indicator object is not null, a number of quality indicators are calculated and printed (lines 136-141), as well as the number of evaluations returned by the algorithm as an output parameter.
Listing 3.17: NSGAII_main: declaring objects, processing the arguments of main(), and creating the algorithm.
96  // Algorithm parameters
97  algorithm.setInputParameter("populationSize", 100);
98  algorithm.setInputParameter("maxEvaluations", 25000);
99
100 // Mutation and Crossover for Real codification
101 parameters = new HashMap();
102 parameters.put("probability", 0.9);
103 parameters.put("distributionIndex", 20.0);
104 crossover = CrossoverFactory.getCrossoverOperator("SBXCrossover", parameters);
105
106 parameters = new HashMap();
107 parameters.put("probability", 1.0 / problem.getNumberOfVariables());
108 parameters.put("distributionIndex", 20.0);
109 mutation = MutationFactory.getMutationOperator("PolynomialMutation", parameters);
110
111 // Selection Operator
112 parameters = null;
113 selection = SelectionFactory.getSelectionOperator("BinaryTournament2", parameters);
114
115 // Add the operators to the algorithm
116 algorithm.addOperator("crossover", crossover);
117 algorithm.addOperator("mutation", mutation);
118 algorithm.addOperator("selection", selection);
119
120 // Add the indicator object to the algorithm
121 algorithm.setInputParameter("indicators", indicators);
Listing 3.18: NSGAII_main: configuring the NSGA-II algorithm
122 ...
123 // Execute the Algorithm
124 long initTime = System.currentTimeMillis();
125 SolutionSet population = algorithm.execute();
126 long estimatedTime = System.currentTimeMillis() - initTime;
127
128 // Result messages
129 logger_.info("Total execution time: " + estimatedTime + "ms");
130 logger_.info("Variables values have been writen to file VAR");
131 population.printVariablesToFile("VAR");
132 logger_.info("Objectives values have been writen to file FUN");
133 population.printObjectivesToFile("FUN");
134
135 if (indicators != null) {
136   logger_.info("Quality indicators");
137   logger_.info("Hypervolume: " + indicators.getHypervolume(population));
138   logger_.info("GD         : " + indicators.getGD(population));
139   logger_.info("IGD        : " + indicators.getIGD(population));
140   logger_.info("Spread     : " + indicators.getSpread(population));
141   logger_.info("Epsilon    : " + indicators.getEpsilon(population));
142
143   int evaluations = ((Integer) algorithm.getOutputParameter("evaluations")).intValue();
144   logger_.info("Speed      : " + evaluations + " evaluations");
145 } // if
146 } // main
147 } // NSGAII_main
Listing 3.19: NSGAII_main: running the algorithm and reporting results
Chapter 4
Experimentation with jMetal
In our research work, when we want to assess the performance of a multi-objective metaheuristic, we usually compare it with other algorithms over a set of benchmark problems. After choosing the test suites and the quality indicators to apply, we carry out a number of independent runs of each experiment, and after that we analyze the results.
Typically, we follow these steps:
1. Configure the algorithms by setting the parameter values in an associated Settings object (see
Subsection 4.1).
2. Optionally, configure the problems to solve. For example, the DTLZ problems are configured by
default with three objectives, while the WFG ones are bi-objective. If we want to modify these
default settings, we have to do it by changing them in the files defining the problems.
3. Execute a number of independent runs per pair (algorithm, problem).
4. Analyze the results. jMetal can generate LaTeX tables and R scripts to present the results and to provide statistical information.
To carry out these steps, we use the jmetal.experiments package, first available in jMetal 2.0, and this chapter is devoted mainly to explaining the use of that package. First, we describe the structure of the jmetal.experiments.Settings class and how it can be used to configure NSGA-II; then, we analyze the jmetal.experiments.Main class. Finally, we illustrate the use of the jmetal.experiments.Experiment class with two examples.
4.1 The jmetal.experiments.Settings Class
1  // Settings.java
2  ...
3  package jmetal.experiments;
4  ...
5  /**
6   * Class representing Settings objects.
7   */
8  public abstract class Settings {
9    protected Problem problem_;
10   protected String problemName_;
11   public String paretoFrontFile_;
12
13   /**
14    * Constructor
15    */
16   public Settings() {
17   } // Constructor
18
19   /**
20    * Constructor
21    */
22   public Settings(String problemName) {
23     problemName_ = problemName;
24   } // Constructor
25
26   /**
27    * Default configure method
28    * @return A problem with the default configuration
29    * @throws jmetal.util.JMException
30    */
31   abstract public Algorithm configure() throws JMException;
32
33   /**
34    * Configure method. Change the default configuration
35    */
36   public final Algorithm configure(HashMap settings) throws JMException {
37     ...
38   } // configure
39
40   /**
41    * Changes the problem to solve
42    * @param problem
43    */
44   void setProblem(Problem problem) {
45     problem_ = problem;
46   } // setProblem
47
48   /**
49    * Returns the problem
50    */
51   Problem getProblem() {
52     return problem_;
53   }
54 } // Settings
• The problem can be set either when creating the object (lines 22-24) or by using the method
setProblem() (lines 44-46).
• The default settings are established in the configure() method (line 31). This method must be
defined in the corresponding subclasses of Settings.
• The values of the parameters can be modified by using a Java HashMap object, passing it as an
argument to the second definition of the configure() method (line 36). A sketch of a minimal
Settings subclass following this scheme is included below.
39 ...
40 /**
41  * Configure NSGAII with user-defined parameter settings
42  * @return A NSGAII algorithm object
43  * @throws jmetal.util.JMException
44  */
45 public Algorithm configure() throws JMException {
46   Algorithm algorithm;
47   Selection selection;
48   Crossover crossover;
49   Mutation mutation;
50
51   HashMap parameters; // Operator parameters
52
53   // Creating the algorithm. There are two choices: NSGAII and its steady-
54   // state variant ssNSGAII
55   algorithm = new NSGAII(problem_);
56   // algorithm = new ssNSGAII(problem_);
57
58   // Algorithm parameters
59   algorithm.setInputParameter("populationSize", populationSize_);
60   algorithm.setInputParameter("maxEvaluations", maxEvaluations_);
61
62   // Mutation and Crossover for Real codification
63   parameters = new HashMap();
64   parameters.put("probability", crossoverProbability_);
65   parameters.put("distributionIndex", crossoverDistributionIndex_);
66   crossover = CrossoverFactory.getCrossoverOperator("SBXCrossover", parameters);
67
68   parameters = new HashMap();
69   parameters.put("probability", mutationProbability_);
70   parameters.put("distributionIndex", mutationDistributionIndex_);
71   mutation = MutationFactory.getMutationOperator("PolynomialMutation", parameters);
72
73   // Selection Operator
74   parameters = null;
75   selection = SelectionFactory.getSelectionOperator("BinaryTournament2", parameters);
76
77   // Add the operators to the algorithm
78   algorithm.addOperator("crossover", crossover);
79   algorithm.addOperator("mutation", mutation);
80   algorithm.addOperator("selection", selection);
81   return algorithm;
82 } // configure
83 } // NSGAII_Settings
The implementation of the configure() method is included in Listing 4.3, where we can observe
that it contains basically the same code used in the NSGAII_main class to configure the algorithm.
To modify specific parameters, we make use of a Java HashMap object. The map is composed of
(key, value) pairs, where the key is a string. The idea is that the state variables defined in the
subclass of Settings are used as keys in the properties object. As commented before, those variables
must be public, and their identifiers must end with the underscore ('_') character.
Let us illustrate this with some pieces of code:
• Creating an instance of NSGA-II with the default parameter settings by using the class NSGAII_Settings:
1 Algorithm nsgaII = new NSGAII_Settings(problemName).configure();
• Let us modify the crossover probability, which is stored in the crossoverProbability_ variable
(Listing 4.2, line 34), setting it to 1.0 (the default value is 0.9):
1 HashMap parameters = new HashMap();
2 parameters.put("crossoverProbability_", 1.0);
3 Algorithm nsgaII = new NSGAII_Settings(problemName).configure(parameters);
 1 // Main.java
 2 ...
 3 package jmetal.experiments;
 4 ...
 5 /**
 6  * Class for running algorithms
 7  */
 8 public class Main {
 9   /**
10    * @param args Command line arguments.
11    * @throws JMException
12    * @throws IOException
13    * @throws SecurityException
14    * Usage: three options
15    *   - jmetal.experiments.Main algorithmName
16    *   - jmetal.experiments.Main algorithmName problemName
17    *   - jmetal.experiments.Main algorithmName problemName paretoFrontFile
18    * @throws ClassNotFoundException
19    */
20   public static void main(String[] args) throws
21       JMException, ... {
22     Problem problem;             // The problem to solve
23     Algorithm algorithm;         // The algorithm to use
24
25     QualityIndicator indicators; // Object to get quality indicators
26
27     Settings settings = null;
28
29     String algorithmName = "";
30     String problemName = "Kursawe"; // Default problem
31     String paretoFrontFile = "";
32     ...
33     // Execute the Algorithm
34     long initTime = System.currentTimeMillis();
35     SolutionSet population = algorithm.execute();
36     long estimatedTime = System.currentTimeMillis() - initTime;
37
38     // Result messages
39     logger_.info("Total execution time: " + estimatedTime + "ms");
40     logger_.info("Objectives values have been written to file FUN");
41     population.printObjectivesToFile("FUN");
42     logger_.info("Variables values have been written to file VAR");
43     population.printVariablesToFile("VAR");
44
45     if (indicators != null) {
46       logger_.info("Quality indicators");
47       logger_.info("Hypervolume: " + indicators.getHypervolume(population));
48       logger_.info("GD         : " + indicators.getGD(population));
49       logger_.info("IGD        : " + indicators.getIGD(population));
50       logger_.info("Spread     : " + indicators.getSpread(population));
51       logger_.info("Epsilon    : " + indicators.getEpsilon(population));
52
53       if (algorithm.getOutputParameter("evaluations") != null) {
54         Integer evals = (Integer) algorithm.getOutputParameter("evaluations");
55         int evaluations = evals.intValue();
56         logger_.info("Speed      : " + evaluations + " evaluations");
57       } // if
58     } // if
59   } // Main
60 } // Main
This method is invoked automatically in each independent run, for each problem and algorithm.
The key point is that a Settings object with the desired parameterization has to be created in order
to obtain the Algorithm to be executed:
25 ...
26 try {
27   int numberOfAlgorithms = algorithmNameList_.length;
28
In this example, as we are interested in four configurations of NSGA-II with four different crossover
probabilities, we define a Java HashMap object per algorithm (line 29) to indicate the desired values
(lines 35-38). The code between lines 40-44 is used to incorporate the names of the Pareto front
files if they are specified. Finally, the Algorithm objects are created and configured, and they are
ready to be executed (lines 46-47); a sketch of what such an algorithmSettings() method may look
like is included below.
2. Once we have defined the algorithmSettings method, we have to do the same with the main
method. First, an object of the NSGAIIStudy class must be created:
51 ...
52 public static void main(String[] args) throws JMException, IOException {
53   NSGAIIStudy exp = new NSGAIIStudy();
54   ...
3. We need to give a name to the experiment (note: take into account that this name will be used
to generate Latex tables, so you should avoid using the underscore symbol '_'). In this case, we
choose the same name as the class, "NSGAIIStudy".
55 ...
56 exp.experimentName_ = "NSGAIIStudy";
57 ...
4. We have to indicate: the names of the algorithms, the problems to solve, the names of the files
containing the Pareto fronts, and a list of the quality indicators to apply:
58 ...
59 exp.algorithmNameList_ = new String[]{"NSGAIIa", "NSGAIIb", "NSGAIIc", "NSGAIId"};
60 exp.problemList_ = new String[]{"ZDT1", "ZDT2", "ZDT3", "ZDT4", "DTLZ1", "WFG2"};
61 exp.paretoFrontFile_ = new String[]{"ZDT1.pf", "ZDT2.pf", "ZDT3.pf", "ZDT4.pf",
                                      "DTLZ1.2D.pf", "WFG2.2D.pf"};
62 exp.indicatorList_ = new String[]{"HV", "SPREAD", "IGD", "EPSILON"};
63 ...
The algorithm names are merely tags that will be used to create the output directories and the
tables. The problem names must be the same as those used in jmetal.problems. We must note that:
• The order of the names of the Pareto front files must be the same as the order of the problems
in the problem list.
• If we use the Pareto front files that can be found on the jMetal Web site, when indicating a
DTLZ problem (such as DTLZ1) we must indicate the 2D file (DTLZ1.2D.pf) if we intend to
solve it using a bi-objective formulation. Furthermore, we have to modify the problem classes,
such as DTLZ1.java, to indicate two objectives.
The same holds if we want to solve the WFG problems: by default they are defined as bi-objective,
so they have to be modified to solve them with more objectives.
5. The next step is to indicate the output directory and the directory where the Pareto front files are
located:
64 ...
65 exp.experimentBaseDirectory_ = "/Users/antonio/Softw/pruebas/jmetal/" +
66                                exp.experimentName_;
67 exp.paretoFrontDirectory_ = "/Users/antonio/Softw/pruebas/data/paretoFronts";
68 ...
6. Once everything is configured, the array containing the Settings of the algorithms must be initialized:
69 ...
70 exp.algorithmSettings_ = new Settings[numberOfAlgorithms];
71 ...
9. Finally, we execute the algorithms. The runExperiment() method has an optional parameter (the
default value is 1) indicating the number of threads to be created to run the experiments (see
Section 4.8 for further details):
78 ...
79 // Run the experiments
80 int numberOfThreads;
81 exp.runExperiment(numberOfThreads = 4);
82 ...
10. Optionally, we may be interested in generating Latex tables and statistical information of the
obtained results. Latex tables are produced by the following command:
83 ...
84 // Generate latex tables
85 exp.generateLatexTables();
86 ...
If you are interested in obtaining boxplots, it is possible to generate R scripts that produce them.
In that case, you need to invoke the generateRBoxplotScripts() method:
 87 ...
 88 // Configure the R scripts to be generated
 89 int rows;
 90 int columns;
 91 String prefix;
 92 String[] problems;
 93
 94 rows = 2;
 95 columns = 3;
 96 prefix = new String("Problems");
 97 problems = new String[]{"ZDT1", "ZDT2", "ZDT3", "ZDT4", "DTLZ1", "WFG2"};
 98
 99 boolean notch;
100 exp.generateRBoxplotScripts(rows, columns, problems, prefix, notch = true, exp);
101 ...
This method generates R scripts which produce .eps files containing rows × columns boxplots of
the list of problems passed as the third parameter. It is necessary to explicitly indicate the problems
to be considered in the boxplots because, if there are too many problems, the resulting graphics will
be very small and difficult to read. In this situation, several calls to generateRBoxplotScripts()
can be included. The name of each script will start with the prefix specified in the fourth parameter
plus the name of the quality indicator, ended with the suffix "Boxplot.R". The last parameter
indicates whether notched boxplots should be generated or not.
Additionally, a method called generateRWilcoxonScripts() is available. This method is intended
to apply the Wilcoxon rank-sum test to the obtained results:
102 ...
103 exp.generateRWilcoxonScripts(problems, prefix, exp);
104 ...
For each indicator, a file with the suffix "Wilcox.R" will be generated. Once each of these scripts is
executed, a Latex file is produced as output. Please see the next section for further details.
Since jMetal 4.4, the Friedman test can be applied to the results of each quality indicator, as
illustrated next:
105 ...
106 // Applying the Friedman test
107 Friedman test = new Friedman(exp);
108 test.executeTest("EPSILON");
109 test.executeTest("HV");
110 test.executeTest("SPREAD");
Figure 4.1: Output directories and files after running the experiment.
latex NSGAIIStudy.tex
dvipdf NSGAIIStudy.dvi
to get a pdf file. Alternatively, you could invoke the pdflatex command:
pdflatex NSGAIIStudy.tex
As an example of the obtained output, Table 4.1 includes the mean and standard deviation of the
results after applying the hypervolume indicator, and the median and interquartile range (IQR) values
are in Table 4.2.
Table 4.3 includes the mean and standard deviation of the results after applying the hypervolume
indicator, and the median and interquartile range (IQR) values are in Table 4.4.
It would also be interesting to have a ranking of the performance of the compared algorithms. The
Friedman test can be used to obtain this ranking. After executing the following command:
pdflatex FriedmanTestEPSILON.tex
a FriedmanTestEPSILON.pdf file containing Table 4.5 is obtained. Similar tables are produced for the
rest of the quality indicators1.
Table 4.5: Average rankings of the algorithms according to the Epsilon indicator
Algorithm Ranking
NSGAIIa 1.1666666666666665
NSGAIIb 2.6666666666666665
NSGAIIc 2.8333333333333335
NSGAIId 3.3333333333333335
The R directory stores the R scripts. As commented before, the script names are composed of
the indicated prefix ("Problems" in the example) and the name of the quality indicator, with the ".R"
extension. Those ending in "Boxplot.R" yield as a result .eps files containing boxplots of the values of
the indicators, while those ending in "Wilcox.R" contain the scripts to produce Latex tables including
the application of the Wilcoxon test.
To run the scripts, if you have R properly installed on your computer, you can type the following
commands:
1 The Friedman test assumes that the lower the values of the indicators, the better. This is true for all the indicators but the hypervolume.
Figure 4.2: Boxplots of the values obtained after applying the hypervolume quality indicator (notch =
true).
Rscript ZDT.HV.Boxplot.R
Rscript ZDT.HV.Wilcox.R
Rscript ZDT.EPSILON.Boxplot.R
Rscript ZDT.EPSILON.Wilcox.R
...
Alternatively, if you are working with a UNIX machine, you can type:
for i in *.R ; do Rscript $i 2>/dev/null ; done
As a result, you will get the same number of files, but with the .eps extension. Figure 4.2 shows the
Problems.HV.Boxplot.eps file. Without entering into details about the results, in a notched boxplot,
if two boxes' notches do not overlap there is strong evidence that their medians differ, so we can
conclude with confidence that NSGAIIa provides the best overall quality indicator values in the
experiment. We can invoke the generateRBoxplotScripts() method with the notch parameter set
to false if we are not interested in including this feature in the boxplots.
As an alternative to boxplots, the Wilcoxon rank-sum test can be used to determine the significance
of the obtained results. To apply the Wilcoxon test to two distributions a and b, we use the R formula
wilcox.test(a,b). The Latex files produced when the "Wilcox.R" scripts are executed contain two
types of tables: one per problem, and a global table summarizing the results. We include the tables of
the first type corresponding to the hypervolume indicator in Tables 4.6 to 4.11; Table ?? groups the
other tables into one. In each table, a N or a O symbol implies a p-value < 0.05, indicating that the
null hypothesis (the two distributions have the same median) is rejected; otherwise, a − is used. The N
is used when the algorithm in the row obtained a better value than the algorithm in the column; the O
indicates the opposite.
To illustrate the use of the Wilcoxon rank-sum test in other studies, we include a table that was used
in [28] in Table 4.13.
Table 4.13: LZ09 benchmark. Results of the Wilcoxon rank-sum test applied to the IHV values [28].
NSGA-IIr NSGA-IIa NSGA-IIde
NSGA-II O O O O O O O O O O O O O O O O O O O O N N N N N N O
NSGA-IIr − N N O − O N N N O N N N N N N N N
NSGA-IIa O N N N N N N N N
4.5 Experimentation Example: StandardStudy
We can observe that this method is simpler than in the case of NSGAIIStudy, because we assume that
each algorithm is configured in its corresponding Settings class. We test five metaheuristics: NSGAII,
SPEA2, MOCell, SMPSO, and GDE3 (lines 24-28).
The main method is included below, where we can observe the algorithm name list (lines 42-43), the
problem list (lines 44-48), and the list of the names of the files containing the Pareto fronts (lines 49-56):
37 ...
38 public static void main(String[] args) throws JMException, IOException {
39   StandardStudy exp = new StandardStudy();
40
41   exp.experimentName_ = "StandardStudy";
42   exp.algorithmNameList_ = new String[]{
43       "NSGAII", "SPEA2", "MOCell", "SMPSO", "GDE3"};
44   exp.problemList_ = new String[]{"ZDT1", "ZDT2", "ZDT3", "ZDT4", "ZDT6",
45       "WFG1", "WFG2", "WFG3", "WFG4", "WFG5", "WFG6",
46       "WFG7", "WFG8", "WFG9",
47       "DTLZ1", "DTLZ2", "DTLZ3", "DTLZ4", "DTLZ5",
48       "DTLZ6", "DTLZ7"};
49   exp.paretoFrontFile_ = new String[]{"ZDT1.pf", "ZDT2.pf", "ZDT3.pf",
50       "ZDT4.pf", "ZDT6.pf",
51       "WFG1.2D.pf", "WFG2.2D.pf", "WFG3.2D.pf",
52       "WFG4.2D.pf", "WFG5.2D.pf", "WFG6.2D.pf",
53       "WFG7.2D.pf", "WFG8.2D.pf", "WFG9.2D.pf",
54       "DTLZ1.3D.pf", "DTLZ2.3D.pf", "DTLZ3.3D.pf",
55       "DTLZ4.3D.pf", "DTLZ5.3D.pf", "DTLZ6.3D.pf",
56       "DTLZ7.3D.pf"};
57 ...
The rest of the code is similar to NSGAIIStudy: the list of indicators is included in line 59, the
directory to write the results and the one containing the Pareto fronts are specified next (lines 63-65),
the number of independent runs is indicated in line 69, the experiment is initialized in line 71, and the
method to run the algorithm is invoked (lines 74-75):
58 ...
59 exp.indicatorList_ = new String[]{"HV", "SPREAD", "EPSILON"};
60
61 int numberOfAlgorithms = exp.algorithmNameList_.length;
62
63 exp.experimentBaseDirectory_ = "/Users/antonio/Softw/pruebas/jmetal/" +
64                                exp.experimentName_;
65 exp.paretoFrontDirectory_ = "/Users/antonio/Softw/pruebas/data/paretoFronts";
66
67 exp.algorithmSettings_ = new Settings[numberOfAlgorithms];
68
69 exp.independentRuns_ = 100;
70
71 exp.initExperiment();
72
73 // Run the experiments
74 int numberOfThreads;
75 exp.runExperiment(numberOfThreads = 4);
76 ...
Finally, we generate the Latex tables and the R scripts. Note that we invoke the methods
generateRBoxplotScripts() and generateRWilcoxonScripts() three times, once per problem family.
The reason is that otherwise the resulting graphs and tables would not fit into an A4 page:
 77 ...
 78 // Generate latex tables
 79 exp.generateLatexTables();
 80
 81 // Configure the R scripts to be generated
 82 int rows;
 83 int columns;
 84 String prefix;
 85 String[] problems;
 86 boolean notch;
 87
 88 // Configuring scripts for ZDT
 89 rows = 3;
 90 columns = 2;
 91 prefix = new String("ZDT");
 92 problems = new String[]{"ZDT1", "ZDT2", "ZDT3", "ZDT4", "ZDT6"};
 93
 94 exp.generateRBoxplotScripts(rows, columns, problems, prefix, notch = false, exp);
 95 exp.generateRWilcoxonScripts(problems, prefix, exp);
 96
 97 // Configure scripts for DTLZ
 98 rows = 3;
 99 columns = 3;
100 prefix = new String("DTLZ");
101 problems = new String[]{"DTLZ1", "DTLZ2", "DTLZ3", "DTLZ4", "DTLZ5",
102                         "DTLZ6", "DTLZ7"};
103
4.6 Experiments when the Pareto Fronts of the Problems are Unknown
37 ...
38 public static void main(String[] args) throws JMException, IOException {
39   StandardStudy exp = new StandardStudy();
40
41   exp.experimentName_ = "StandardStudy";
42   exp.algorithmNameList_ = new String[]{
43       "NSGAII", "SPEA2", "MOCell", "SMPSO", "GDE3"};
44   exp.problemList_ = new String[]{"ZDT1", "ZDT2", "ZDT3", "ZDT4", "ZDT6",
45       "WFG1", "WFG2", "WFG3", "WFG4", "WFG5", "WFG6",
46       "WFG7", "WFG8", "WFG9",
47       "DTLZ1", "DTLZ2", "DTLZ3", "DTLZ4", "DTLZ5",
48       "DTLZ6", "DTLZ7"};
49   exp.paretoFrontFile_ = new String[18]; // Space allocation for 18 fronts
50   ...
51   exp.paretoFrontDirectory_ = ""; // This directory must be empty
52   ...
As can be seen, the QualityIndicator object is applied to the population returned by the algorithm.
This way, the program reports the values of the desired quality indicators of the obtained solution
set.
Another example of using QualityIndicator objects was introduced in Section 3.3.1, where the use
of the hypervolume inside NSGA-II to measure the convergence speed of the algorithm was detailed.
Chapter 5
Parallel Algorithms
Since version 4.3, jMetal provides basic support for developing parallel metaheuristics. This chapter
describes our first approach to this issue, which is currently focused on allowing the parallel evaluation
of solutions, taking advantage of the multicore capabilities of modern processors.
The code of IParallelEvaluator is included in Listing 5.1, and contains four methods:
• parallelEvaluation(): all the solutions in the internal list are evaluated in parallel, and a list
containing them is returned. The remaining methods of the interface are sketched below.
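The shape of the interface can be inferred from the method signatures visible in Listing 5.2; the
following sketch reconstructs it, with comments of our own rather than the original javadoc.

// Sketch of the IParallelEvaluator interface, reconstructed from the signatures
// shown in Listing 5.2 (comments are ours, not from the jMetal sources)
package jmetal.util.parallel;

import java.util.List;

import jmetal.core.Problem;
import jmetal.core.Solution;

public interface IParallelEvaluator {
  // Initializes the evaluator with the problem whose solutions will be evaluated
  public void startEvaluator(Problem problem);

  // Adds a solution to the internal list of solutions pending evaluation
  public void addSolutionForEvaluation(Solution solution);

  // Evaluates in parallel all the solutions in the internal list and returns them
  public List<Solution> parallelEvaluation();

  // Releases the resources (e.g., the threads) used by the evaluator
  public void stopEvaluator();
}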
The IParallelEvaluator interface allows many possible implementations. In jMetal 4.3 we provide
the MultithreadedEvaluator class, which is designed to make use of the processors/cores which are
available on most computers nowadays. The constructor of this class is detailed in Listing 5.2. It takes
as argument an integer value indicating the desired number of threads to be used. If this argument takes
the value 0 then the number of processors of the system is used, according to the value returned by the
Java Runtime.getRuntime().availableProcessors() method.
 1 // MultithreadedEvaluator.java
 2 ...
 3 package jmetal.util.parallel;
 4 ...
 5 public class MultithreadedEvaluator implements IParallelEvaluator {
 6 ...
 7   /**
 8    * Constructor
 9    * @param threads
10    */
11   public MultithreadedEvaluator(int threads) {
12     numberOfThreads_ = threads;
13     if (threads == 0)
14       numberOfThreads_ = Runtime.getRuntime().availableProcessors();
15     else if (threads < 0) {
16       Configuration.logger_.severe("MultithreadedEvaluator: the number of threads " +
17           "cannot be a negative number " + threads);
18     }
19     else {
20       numberOfThreads_ = threads;
21     }
22   }
23
24   public void startEvaluator(Problem problem) { ...
25   public void addSolutionForEvaluation(Solution solution) { ...
26   public List<Solution> parallelEvaluation() { ...
27   public void stopEvaluator() { ...
28 }
 7   int threads = 4; // 0 - use all the available cores
 8   IParallelEvaluator parallelEvaluator = new MultithreadedEvaluator(threads);
 9
10   algorithm = new pNSGAII(problem, parallelEvaluator);
11   ...
12 }
13 }
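Putting these pieces together, a complete runner for pNSGAII could look like the following sketch.
The class name ParallelNSGAIIRunner, the problem choice, and the operator parameters are
illustrative and do not reproduce the pNSGAII_main class distributed with jMetal.

// Sketch of a minimal runner for pNSGAII (problem and parameter values are illustrative)
import java.util.HashMap;

import jmetal.core.Algorithm;
import jmetal.core.Problem;
import jmetal.core.SolutionSet;
import jmetal.metaheuristics.nsgaII.pNSGAII;
import jmetal.operators.crossover.CrossoverFactory;
import jmetal.operators.mutation.MutationFactory;
import jmetal.operators.selection.SelectionFactory;
import jmetal.problems.ZDT.ZDT1;
import jmetal.util.JMException;
import jmetal.util.parallel.IParallelEvaluator;
import jmetal.util.parallel.MultithreadedEvaluator;

public class ParallelNSGAIIRunner {
  public static void main(String[] args) throws JMException, ClassNotFoundException {
    Problem problem = new ZDT1("Real");

    int threads = 4; // 0 - use all the available cores
    IParallelEvaluator parallelEvaluator = new MultithreadedEvaluator(threads);

    Algorithm algorithm = new pNSGAII(problem, parallelEvaluator);
    algorithm.setInputParameter("populationSize", 100);
    algorithm.setInputParameter("maxEvaluations", 25000);

    HashMap parameters = new HashMap();
    parameters.put("probability", 0.9);
    parameters.put("distributionIndex", 20.0);
    algorithm.addOperator("crossover",
        CrossoverFactory.getCrossoverOperator("SBXCrossover", parameters));

    parameters = new HashMap();
    parameters.put("probability", 1.0 / problem.getNumberOfVariables());
    parameters.put("distributionIndex", 20.0);
    algorithm.addOperator("mutation",
        MutationFactory.getMutationOperator("PolynomialMutation", parameters));

    algorithm.addOperator("selection",
        SelectionFactory.getSelectionOperator("BinaryTournament2", null));

    // Run the algorithm and write out the resulting front
    SolutionSet population = algorithm.execute();
    population.printObjectivesToFile("FUN");
    population.printVariablesToFile("VAR");
  }
}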
The pNSGAII class contains a state variable to reference the parallel evaluator, as shown in line 7 in
Listing 5.4. It is initialized in the class constructor (line 17).
1 // pNSGAII . j a v a
2 ...
3 package j m e t a l . m e t a h e u r i s t i c s . n s g a I I ;
4 ...
5 p u b l i c c l a s s pNSGAII e x t e n d s Algorithm {
6
7 IParallelEvaluator parallelEvaluator ;
8
9 /∗ ∗
10 ∗ Constructor
11 ∗ @param problem Problem t o s o l v e
12 ∗ @param e v a l u a t o r P a r a l l e l e v a l u a t o r
13 ∗/
14 p u b l i c pNSGAII ( Problem problem , I P a r a l l e l E v a l u a t o r e v a l u a t o r ) {
15 s u p e r ( problem ) ;
16
17 parallelEvaluator = evaluator ;
18 } // pNSGAII
19 ...
20 p u b l i c S o l u t i o n S e t e x e c u t e ( ) throws JMException , ClassNotFoundException {
21 ...
The parallel evaluator is started in line 22 of the code included in Listing 5.5. The startEvaluator()
method takes as parameter the problem being solved, which is necessary for the subsequent evaluation
of the solutions. The initial population is initialized in three steps. First, in the loop starting in line 26,
every newly instantiated solution (line 27) is sent to the evaluator (line 28); second, the
parallelEvaluation() method of the parallel evaluator is invoked (line 31); finally, the evaluated
solutions are inserted into the population (lines 32-35).
19 ...
20 public SolutionSet execute() throws JMException, ClassNotFoundException {
21   ...
22   parallelEvaluator_.startEvaluator(problem_);
23   ...
24   // Create the initial solutionSet
25   Solution newSolution;
26   for (int i = 0; i < populationSize; i++) {
27     newSolution = new Solution(problem_);
28     parallelEvaluator_.addSolutionForEvaluation(newSolution);
29   }
30
31   List<Solution> solutionList = parallelEvaluator_.parallelEvaluation();
32   for (Solution solution : solutionList) {
33     population.add(solution);
34     evaluations++;
35   }
36   ...
Table 5.1: Solving ZDT1 with NSGA-II and pNSGAII with 1, 8, 32, 128, and 512 threads (times in
milliseconds).
The same scheme is applied to evaluate in parallel the solutions created after applying the crossover
and mutation operators, as it can be observed in the piece of code included in Listing 5.6.
36 ...
37 // Generations
38 while (evaluations < maxEvaluations) {
39   // Create the offSpring solutionSet
40   offspringPopulation = new SolutionSet(populationSize);
41   Solution[] parents = new Solution[2];
42   for (int i = 0; i < (populationSize / 2); i++) {
43     if (evaluations < maxEvaluations) {
44       // obtain parents
45       parents[0] = (Solution) selectionOperator.execute(population);
46       parents[1] = (Solution) selectionOperator.execute(population);
47       Solution[] offSpring = (Solution[]) crossoverOperator.execute(parents);
48       mutationOperator.execute(offSpring[0]);
49       mutationOperator.execute(offSpring[1]);
50       parallelEvaluator_.addSolutionForEvaluation(offSpring[0]);
51       parallelEvaluator_.addSolutionForEvaluation(offSpring[1]);
52     } // if
53   } // for
54
55   List<Solution> solutions = parallelEvaluator_.parallelEvaluation();
56
57   for (Solution solution : solutions) {
58     offspringPopulation.add(solution);
59     evaluations++;
60   }
61   ...
Listing 5.6: pNSGAII class. Evaluating solutions in parallel in the main loop of NSGA-II
Besides pNSGAII, a parallel version of the SMPSO algorithm, named pSMPSO (included in the
jmetal.metaheuristics.smpso package), is provided in jMetal 4.3.
Table 5.2: Solving ZDT1b with NSGA-II and pNSGAII with 1, 8, 32, 128, and 512 threads (times in
milliseconds). ZDT1b is the same problem as ZDT1 but includes an idle loop in the evaluation function
to increase its computing time.
Next, we increase the computing time needed to evaluate solutions of ZDT1 by adding the following
loop to the evaluation function:
1 for (long i = 0; i < 10000000; i++) ;
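One way of packaging this change, as a sketch, is a small problem subclass; the ZDT1b class below is
illustrative and assumes that ZDT1 exposes a constructor taking the solution type name.

// Sketch of a ZDT1b problem variant: identical to ZDT1 but with an idle loop in
// evaluate() to make each evaluation artificially more expensive
package jmetal.problems.ZDT;

import jmetal.core.Solution;
import jmetal.util.JMException;

public class ZDT1b extends ZDT1 {

  public ZDT1b(String solutionType) throws ClassNotFoundException {
    super(solutionType);
    problemName_ = "ZDT1b";
  }

  public void evaluate(Solution solution) throws JMException {
    // Idle loop to increase the computing time of each evaluation
    for (long i = 0; i < 10000000; i++) ;
    super.evaluate(solution);
  }
}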
We called this problem ZDT1b, and the computing times of the NSGA-II variants are included in
Table 5.2. Now, NSGA-II requires 86.5 seconds to perform 25,000 evaluations, while the multithreaded
versions with 8 or more threads take roughly 24 seconds, which means a speed-up of 3.6 (i.e., an
efficiency of 0.45). The reason for not achieving a higher speed-up is that we are evaluating the
individuals in parallel, but the ranking and crowding procedures are carried out sequentially.
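These figures follow from the usual definitions of speed-up and efficiency with p = 8 threads:

\[
S_p = \frac{T_1}{T_p} = \frac{86.5\,\text{s}}{24\,\text{s}} \approx 3.6, \qquad
E_p = \frac{S_p}{p} = \frac{3.6}{8} = 0.45 \quad (p = 8).
\]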
Chapter 6
How-to’s
This chapter contains answers to some questions that may arise when working with jMetal.
30   // Default settings
31   populationSize_ = 100;
32   maxEvaluations_ = 25000;
33
34   mutationProbability_ = 1.0 / problem_.getNumberOfBits();
35   crossoverProbability_ = 0.9;
36 } // NSGAIIBinary_Settings
37 ...
In the configure() method, we choose the single point crossover and the bit-flip mutation operators
(lines 62-68):
38 /**
39  * Configure NSGAII with user-defined parameter settings
40  * @return A NSGAII algorithm object
41  * @throws jmetal.util.JMException
42  */
43 public Algorithm configure() throws JMException {
44   Algorithm algorithm;
45   Operator selection;
46   Operator crossover;
47   Operator mutation;
48
49   QualityIndicator indicators;
50
51   HashMap parameters; // Operator parameters
52
53   // Creating the problem
54   algorithm = new NSGAII(problem_);
55
56   // Algorithm parameters
57   algorithm.setInputParameter("populationSize", populationSize_);
58   algorithm.setInputParameter("maxEvaluations", maxEvaluations_);
59
60
61   // Mutation and Crossover Binary codification
62   parameters = new HashMap();
63   parameters.put("probability", 0.9);
64   crossover = CrossoverFactory.getCrossoverOperator("SinglePointCrossover", parameters);
65
66   parameters = new HashMap();
67   parameters.put("probability", 1.0 / problem_.getNumberOfBits());
68   mutation = MutationFactory.getMutationOperator("BitFlipMutation", parameters);
69
70   // Selection Operator
71   parameters = null;
72   selection = SelectionFactory.getSelectionOperator("BinaryTournament2", parameters);
73
74   // Add the operators to the algorithm
75   algorithm.addOperator("crossover", crossover);
76   algorithm.addOperator("mutation", mutation);
77   algorithm.addOperator("selection", selection);
78
79   return algorithm;
80 } // configure
If we want to use NSGA-II to solve the ZDT1 problem by using a binary-coded real representation,
we simply need to execute this command: java jmetal.experiments.Main NSGAIIBinary ZDT4. If
the problem only admits a binary encoding (e.g., the ZDT5 problem), then line 22 must be modified
as follows:
Object[] problemParams = {"Binary"};
6.2 How to use permutation representations in jMetal
In the configure() method, we choose the PMX crossover and swap mutation operators (lines 62-68):
38 /**
39  * Configure NSGAII with user-defined parameter settings
40  * @return A NSGAII algorithm object
41  * @throws jmetal.util.JMException
42  */
43 public Algorithm configure() throws JMException {
44   Algorithm algorithm;
45   Operator selection;
46   Operator crossover;
47   Operator mutation;
48
49   QualityIndicator indicators;
50
51   HashMap parameters; // Operator parameters
52
53   // Creating the problem
54   algorithm = new NSGAII(problem_);
55
56   // Algorithm parameters
 8  */
 9 public class IntRealSolutionType extends SolutionType {
10   private int intVariables_;
11   private int realVariables_;
12
13   /**
14    * Constructor
15    */
16   public IntRealSolutionType(Problem problem, int intVariables, int realVariables)
       throws ClassNotFoundException {
17     super(problem);
18     intVariables_ = intVariables;
19     realVariables_ = realVariables;
20   } // Constructor
21
22   /**
23    * Creates the variables of the solution
24    * @param decisionVariables
25    * @throws ClassNotFoundException
26    */
27   public Variable[] createVariables() throws ClassNotFoundException {
28     Variable[] variables = new Variable[problem_.getNumberOfVariables()];
29
30     for (int var = 0; var < intVariables_; var++)
31       variables[var] = new Int((int) problem_.getLowerLimit(var),
                                  (int) problem_.getUpperLimit(var));
32
33     for (int var = intVariables_; var < (intVariables_ + realVariables_); var++)
34       variables[var] = new Real(problem_.getLowerLimit(var),
                                   problem_.getUpperLimit(var));
35
36     return variables;
37   } // createVariables
38 } // IntRealSolutionType
In jMetal 4.0 we provide two solution types having mixed variables: IntRealSolutionType and
ArrayRealAndBinarySolutionType; the first one contains integer and real variables, and the second
one represents solutions having an array of real values plus a binary string. The code of
IntRealSolutionType is included in Listing 6.1. We can observe that the numbers of integer and real
variables are indicated in the class constructor (lines 16-20), and when the createVariables() method is
called, the required variables are created. The ArrayRealAndBinarySolutionType class is even simpler
(see Listing 6.2).
 1 // ArrayRealAndBinarySolutionType.java
 2 ...
 3 package jmetal.encodings.solutionType;
 4 ...
 5 /**
 6  * Class representing the solution type of solutions composed of an array of reals
 7  * and a binary string.
 8  * ASSUMPTIONs:
 9  *  - The numberOfVariables field in class Problem must contain the number
10  *    of real variables. This field is used to apply real operators (e.g.,
11  *    mutation probability)
12  *  - The upperLimit and lowerLimit arrays must have the length indicated
13  *    by numberOfVariables.
14  */
15 public class ArrayRealAndBinarySolutionType extends SolutionType {
16   private int binaryStringLength_;
17   private int numberOfRealVariables_;
18   /**
19    * Constructor
20    * @param problem
21    * @param realVariables Number of real variables
22    * @param binaryStringLength Length of the binary string
23    * @throws ClassNotFoundException
24    */
25   public ArrayRealAndBinarySolutionType(Problem problem,
26                                         int realVariables,
27                                         int binaryStringLength)
28       throws ClassNotFoundException {
29     super(problem);
30     binaryStringLength_ = binaryStringLength;
31     numberOfRealVariables_ = realVariables;
32   } // Constructor
33
34   /**
35    * Creates the variables of the solution
36    * @param decisionVariables
37    * @throws ClassNotFoundException
38    */
39   public Variable[] createVariables() throws ClassNotFoundException {
40     Variable[] variables = new Variable[2];
41
42     variables[0] = new ArrayReal(numberOfRealVariables_, problem_);
43     variables[1] = new Binary(binaryStringLength_);
44     return variables;
45   } // createVariables
46 } // ArrayRealAndBinarySolutionType
As with any other solution type, the key point is that we can define operators to be applied to it. As
we observed in the description of the SBX crossover (see Listing 3.5), we can specify in the operator the
list of valid types to which it can be applied. In jMetal 4.0 we supply two operators for this solution type:
• SBXSinglePointCrossover: applies an SBX crossover to the real variables and a single point
crossover to the binary part.
• PolynomialBitFlipMutation: the real part of the solution is mutated with a polynomial mutation
and a bit flip mutation is applied to the binary string.
If we take a look at the implementation of these two operators, we can observe that they do not differ
from any other operator, such as the SBX crossover detailed in Section 3.1.2.
The solution types and operators cited in this section can be used as templates to develop your own
solution types and associated operators if they are not available in jMetal. If you do so and think that
your new classes can be useful to other researchers, please feel free to contact us so that we can include
them in jMetal.
If we run the utility without any parameters we get the following messages:
Thus, to select the non-dominated solutions from the previous file, we have to execute the utility
as follows (we assume that the number of objectives of the problem is 2):
1 % java jmetal.util.ExtractParetoFront front 2
• To accelerate the hypervolume computation, no normalization is performed. This must be taken
into account if this implementation is going to be used instead of Zitzler's hypervolume, contained
in the jmetal.qualityIndicator.Hypervolume class.
• The bi-objective case has been optimized when calculating the hypervolume contribution of the
solutions in a SolutionSet. To give an idea of the performance improvements, when solving the
ZDT1 problem with SMS-EMOA on our development laptop the times are reduced from 47s to 3.6s.
• When the number of objectives is higher than two, the WFG algorithm is used. We have to note
that, although the WFG hypervolume is considered to be very fast beyond 5 objectives, it is still
too slow to be used in algorithms such as SMPSOhv when trying to solve many-objective
optimization problems.
#NSGAII.conf file
populationSize=100
maxEvaluations=25000
#mutationProbability=.6
crossoverProbability=0.9
mutationDistributionIndex=20.0
crossoverDistributionIndex=20.0
The parameters that can be set are those indicated in the corresponding Foo_Settings class (e.g.,
NSGAII_Settings). There is no need to indicate the values of all the parameters; if some of them are
missing in the file, or the line containing them starts with #, the default value in the Settings class
will be used.
To read this file, the metaheuristic has to be invoked through the jmetal.experiments.MainC class.
After putting the configuration file in the working directory, the MainC class can be invoked as follows:
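Assuming that MainC accepts the same algorithmName and problemName arguments as
jmetal.experiments.Main (an assumption on our part; the exact syntax may differ), a typical
invocation would look like:

% java jmetal.experiments.MainC NSGAII ZDT1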
Chapter 7
What about's
This chapter contains answers to some questions which have not been dealt with before in the manual.
of N real values. In jMetal 3.0 we incorporated two optimization types based on this idea: ArrayReal
and ArrayInt.
Using optimization types brings some difficulties that have to be solved. We now have the choice of
using a set of N decision variables, or one decision variable composed of an array of N values, which
affects the way variable types are initialized and used. We have solved these problems by using wrapper
objects, which are included in jmetal.util.wrapper; in particular, we will show next how to use the
XReal wrapper.
Let us start by showing the class implementing the Schaffer problem:
 1 // Schaffer.java
 2 ...
 3 package jmetal.problems;
 4 ...
 5 /**
 6  * Class representing problem Schaffer
 7  */
 8 public class Schaffer extends Problem {
 9
10   /**
11    * Constructor.
12    * Creates a default instance of problem Schaffer
13    * @param solutionType The solution type must be "Real" or "BinaryReal".
14    */
15   public Schaffer(String solutionType) throws ClassNotFoundException {
16     numberOfVariables_ = 1;
17     numberOfObjectives_ = 2;
18     numberOfConstraints_ = 0;
19     problemName_ = "Schaffer";
20
21     lowerLimit_ = new double[numberOfVariables_];
22     upperLimit_ = new double[numberOfVariables_];
23     lowerLimit_[0] = -100000;
24     upperLimit_[0] = 100000;
25
26     if (solutionType.compareTo("BinaryReal") == 0)
27       solutionType_ = new BinaryRealSolutionType(this);
28     else if (solutionType.compareTo("Real") == 0)
29       solutionType_ = new RealSolutionType(this);
30     else {
31       System.out.println("Error: solution type " + solutionType + " invalid");
32       System.exit(-1);
33     }
34   } // Schaffer
35
36
37   /**
38    * Evaluates a solution
39    * @param solution The solution to evaluate
40    * @throws JMException
41    */
42   public void evaluate(Solution solution) throws JMException {
43     Variable[] variable = solution.getDecisionVariables();
44
45     double[] f = new double[numberOfObjectives_];
46     f[0] = variable[0].getValue() * variable[0].getValue();
47
48     f[1] = (variable[0].getValue() - 2.0) *
49            (variable[0].getValue() - 2.0);
50
51     solution.setObjective(0, f[0]);
52     solution.setObjective(1, f[1]);
53   } // evaluate
54 } // Schaffer
The class constructor contains at the end a group of sentences indicating the allowed solution types
that can be used to solve the problem (BinaryRealSolutionType and RealSolutionType). The
evaluate() method directly accesses the variables to evaluate the solutions. Schaffer's problem is an
example of a problem that does not need to use optimized types, given that it has only one variable.
Let us consider now problems which can have many variables: some examples are the ZDT, DTLZ,
and WFG benchmark problems, and Kursawe's problem. We use this last one as an example. Its
constructor is included next:
 1 public Kursawe(String solutionType, Integer numberOfVariables) throws
                ClassNotFoundException {
 2   numberOfVariables_ = numberOfVariables.intValue();
 3   numberOfObjectives_ = 2;
 4   numberOfConstraints_ = 0;
 5   problemName_ = "Kursawe";
 6
 7   upperLimit_ = new double[numberOfVariables_];
 8   lowerLimit_ = new double[numberOfVariables_];
 9
10   for (int i = 0; i < numberOfVariables_; i++) {
11     lowerLimit_[i] = -5.0;
12     upperLimit_[i] = 5.0;
13   } // for
14
15   if (solutionType.compareTo("BinaryReal") == 0)
16     solutionType_ = new BinaryRealSolutionType(this);
17   else if (solutionType.compareTo("Real") == 0)
18     solutionType_ = new RealSolutionType(this);
19   else if (solutionType.compareTo("ArrayReal") == 0)
20     solutionType_ = new ArrayRealSolutionType(this);
21   else {
22     System.out.println("Error: solution type " + solutionType + " invalid");
23     System.exit(-1);
24   }
25 } // Kursawe
We can observe that, at the end of the constructor, we have added ArrayRealSolutionType as a
third choice of solution representation for the problem. The point now is that directly accessing the
decision variables of the problem is cumbersome, because we must distinguish what kind of solution
type we are using. The use of the XReal wrapper simplifies this task, as we can see in the evaluate()
method:
 1 public void evaluate(Solution solution) throws JMException {
 2   XReal vars = new XReal(solution);
 3
 4   double aux, xi, xj;          // auxiliary variables
 5   double[] fx = new double[2]; // function values
 6   double[] x = new double[numberOfVariables_];
 7   for (int i = 0; i < numberOfVariables_; i++)
 8     x[i] = vars.getValue(i);
 9
10   fx[0] = 0.0;
11   for (int var = 0; var < numberOfVariables_ - 1; var++) {
12     xi = x[var] * x[var];
13     xj = x[var + 1] * x[var + 1];
14     aux = (-0.2) * Math.sqrt(xi + xj);
15     fx[0] += (-10.0) * Math.exp(aux);
16   } // for
17
18   fx[1] = 0.0;
19
20   for (int var = 0; var < numberOfVariables_; var++) {
21     fx[1] += Math.pow(Math.abs(x[var]), 0.8) +
22              5.0 * Math.sin(Math.pow(x[var], 3.0));
23   } // for
24
25   solution.setObjective(0, fx[0]);
26   solution.setObjective(1, fx[1]);
27 } // evaluate
Now, the wrapper encapsulates the access to the solutions through the getValue(index) method.
We must note that using the XReal wrapper implies that all the operators working with real values must
use it too (e.g., the real crossover and mutation operators). Attention must also be paid when requesting
information about the parameters of the problems, such as the number of variables. This information is
typically obtained by invoking getNumberOfVariables() on the problem to be solved, which in turn
returns the value of the state variable numberOfVariables_. However, while this works properly when
using RealSolutionType, that method returns a value of 1 when using ArrayRealSolutionType. Let us
recall that we are replacing N variables by one variable composed of an array of size N. To avoid this
issue, the number of variables must be obtained through the XReal wrapper, as sketched below.
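As a sketch of this pattern (assuming the wrapper exposes a method such as
getNumberOfDecisionVariables(); the exact name may differ), an evaluate() method that works for
both solution types could be written as follows. The objective function itself is illustrative.

// Fragment belonging to a Problem subclass: obtaining the number of variables
// through the XReal wrapper so that the code works for both RealSolutionType
// and ArrayRealSolutionType
public void evaluate(Solution solution) throws JMException {
  XReal vars = new XReal(solution);

  // problem_.getNumberOfVariables() would return 1 for ArrayRealSolutionType,
  // so we ask the wrapper instead (method name is an assumption)
  int numberOfVariables = vars.getNumberOfDecisionVariables();

  double sum = 0.0;
  for (int i = 0; i < numberOfVariables; i++)
    sum += vars.getValue(i) * vars.getValue(i);

  solution.setObjective(0, sum);
}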
To give an idea of the kind of benefits of using the optimized type ArrayReal, we have executed
NSGA-II to solve the ZDT1 problem with 1000 and 5000 variables (the default value is 30). The target
computer is a MacBook with a 2GHz Intel Core 2 Duo and 4GB of 1067 MHz DDR3 RAM, running Snow
Leopard; the version of the JDK is 1.6.0 17. The computing times of the algorithm when using the
RealSolutionType and ArrayRealSolutionType solution types when solving the problem with 1000
variables are 12.5s and 11.4s, respectively; in the case of the problem with 5000 variables, the times are
90s and 69s, respectively.
On the other hand, if we configure ZDT1 with 10,000 variables, the program fails with an out-of-
memory error when using RealSolutionType, while it runs properly when using the optimized type. The
memory error can be fixed easily by including the proper flags when launching the Java virtual machine
(e.g., java -Xmx512M jmetal.experiments.Main NSGAII ZDT1), but this is an example illustrating that
the memory savings resulting from using an optimized type can be significant.
Chapter 8
Versions and Release Notes
This manual starts with jMetal 2.0, released in December 2008. We detail next the release notes, new
features, and changes in the manual for the current release, jMetal 4.5, and the previous versions.
New features
• Support for configuring metaheuristics parameter settings from properties files (see Section 6.7).
• The FastHypervolume class, which implements the WFG algorithm for computing the Hypervol-
ume quality indicator (Section 6.6). This class includes an optimized algorithm for calculating the
Hypervolume for problems having two objectives.
• New versions of the SMPSOhv and SMS-EMOA (named FastSMSEMOA) metaheuristics, which take
advantage of the FastHypervolume class when solving bi-objective problems.
• The AbySS algorithm has been adapted to use the ArrayReal encoding. This has required modifying
the Distance class.
Bugs
• Fixed a bug in method Solution.indexBest().
New features
• New algorithm: dMOPSO [39].
• New algorithm: SMPSOhv, a variant of SMPSO based on using the hypervolume indicator [24].
• New algorithm: cMOEAD, a variant of MOEAD adopting a constraint handling scheme based
on [1].
• New Settings classes have been added (cMOEAD_Settings, dMOPSO_Settings, MOCHC_Settings,
NSGAIIPermutation_Settings, SMPSOhv_Settings).
Bugs
• Fixed a bug in class SolutionComparator.
New features
• New package jmetal.util.parallel, including the IParallelEvaluator interface and the
MultithreadedEvaluator class.
• Two new algorithms: pNSGAII and pSMPSO, including the pNSGAII_main, pNSGAII_Settings,
pSMPSO_main, and pSMPSO_Settings classes.
• Two new problems: Binh2 and FourBarTruss.
New features
• The MOTSP problem, a multi-objective version of the TSP.
• The NSGAII_MOTSP_main class, aimed at solving the MOTSP problem.
• A new random number generator based on the Mersenne Twister (see Subsection 6.3)4 .
• The Experiment class can obtain reference fronts when they are not available in advance5 . See
Section 4.6.
• A new utility, called jmetal.util.ExtractParetoFront, has been added to extract the non-
dominated solutions from a file containing both dominated and non-dominated solutions. See
Section 6.5.
Bugs
Some minor bugs have been fixed.
Performance improvements
The following improvements affecting performance have been included:
• The fast non-dominated sorting algorithm has been optimized6 .
• The thread model in the Experiment class has been changed to allow a better use of multicore
CPUs7 .
4 Contribution of Jean Laurent Hippolyte
5 Contribution of Jorge Rodríguez
6 Contribution of Guillaume Jacquenot
7 Contribution of Jorge Rodríguez
New features
• The former jmetal.base package has been renamed as jmetal.core. The reason is to try to keep
the same package structure as in the C# version of jMetal we are developing8 ; base is a reserved
keyword in C#.
• The package structure has been modified. Now, the former package jmetal.base.operator be-
comes jmetal.operators, and jmetal.base.variable and jmetal.base.solutionType are now
jmetal.encodings.variable and jmetal.encodings.solutionType.
• New encoding: ArrayRealAndBinarySolutionType allows a representation combining an array of
reals and a binary string.
• Two new operators: PolynomialBitFlipMutation (package: jmetal.operators.mutation) and
SBXSinglePointCrossover (package: jmetal.operators.crossover), intended to be applied to
ArrayRealAndBinarySolutionType solutions.
• The operators can be configured now when their constructor is invoked (see Subsection 3.1.2, which
includes the example of the SBX crossover in Listing 3.3).
Removed features
The GUI is not available in this release.
Bugs
Many minor bugs have been fixed.
Performance improvements
Many sorting methods used in the framework have been optimized, and the new approach to creating
operators is more efficient (previously, all the operator parameters were checked each time an operator
was invoked), which has led to significant performance improvements. To give an idea of the benefits
of using the new version in terms of computing time, we include two examples. The target computer is
a MacBook with a 2GHz Intel Core 2 Duo and 4GB of 1067 MHz DDR3 RAM, running Mac OS X Lion
10.7.2 (11C74); the version of the JDK is 1.6.0 26.
In the first example, we run NSGA-II to solve problem Kursawe using typical settings (population
size: 100, number of evaluations: 25000). The times obtained when running the algorithm with versions
3.1 and 4.0 of jMetal are about 2.6s and 1.8s, respectively. In the second example, we execute MOEAD
to solve the LZ09_F1 problem with standard settings (population size: 300, number of evaluations:
300000), getting times in the order of 14.7s and 3.3s, respectively. That is, the time reductions are
about 30% and 77% in the two examples, respectively.
8 http://jmetalnet.sourceforge.net
8.6 Version 3.1 (1st October 2010)
New features
• A new solution type: ArrayRealAndBinarySolutionType (Package: jmetal.base.solutionType).
Bugs
Bugs in the following packages and classes have been fixed:
• Class jmetal.base.operator.crossover.PMXCrossover
New features
• A new approach to define solution representations (Section 3.1.1).
• Two wrapper classes, XReal and XInt, to encapsulate the access to the different included repre-
sentations of real and integer types, respectively.
Bugs
Bugs in the following packages and classes have been fixed:
• Class jmetal.metaheuristics.moead.MOEAD
• Added Chapter 4
• Added Chapter 6
• Added Chapter 7
New features
• A random search algorithm (package: jmetal.metaheuristic.randomSearch).
• The experiments package allows generating Latex tables including the application of the Wilcoxon
statistical test to the results of jMetal experiments. Additionally, it can be indicated whether to
generate notched or non-notched boxplots (Chapter 4).
• A first approximation to the use of threads to run experiments in parallel has been included
(Section 4.8).
Bugs
Bugs in the following packages have been fixed:
• jmetal.problems.ConstrEx.
• jmetal.metaheuristics.paes.Paes main.
• jmetal.metaheuristics.moead.MOEAD.
New features
• Class jmetal.experiments.Experiment: method generateRScripts().
• The IBEA algorithm [46] (package: jmetal.metaheuristic.ibea). This algorithm is included for
testing purposes; we have not validated the implementation yet.
Bugs
• A bug in the jmetal.base.operator.crossover.SinglePointCrossover class has been fixed.
New features
• Package jmetal.experiments.
Known Bugs
Additions and Changes to the Manual
9 http://www.r-project.org/
Bibliography
[1] M. Asafuddoula, T. Ray, R. Sarker, and K. Alam. An adaptive constraint handling approach
embedded moea/d. In Evolutionary Computation (CEC), 2012 IEEE Congress on, pages 1–8,
2012.
[2] S. Bleuler, M. Laumanns, L. Thiele, and E. Zitzler. PISA — a platform and programming language
independent interface for search algorithms. In C. M. Fonseca, P. J. Fleming, E. Zitzler, K. Deb,
and L. Thiele, editors, Evolutionary Multi-Criterion Optimization (EMO 2003), Lecture Notes in
Computer Science, pages 494 – 508, Berlin, 2003. Springer.
[3] D.W. Corne, N.R. Jerram, J.D. Knowles, and M.J. Oates. PESA-II: Region-based selection in
evolutionary multiobjective optimization. In Genetic and Evolutionary Computation Conference
(GECCO-2001), pages 283–290. Morgan Kaufmann, 2001.
[4] K. Deb. Multi-objective optimization using evolutionary algorithms. John Wiley & Sons, 2001.
[5] K. Deb, L. Thiele, M. Laumanns, and E. Zitzler. Scalable test problems for evolutionary multiob-
jective optimization. In Ajith Abraham, Lakhmi Jain, and Robert Goldberg, editors, Evolutionary
Multiobjective Optimization. Theoretical Advances and Applications, pages 105–145. Springer, USA,
2005.
[6] Kalyanmoy Deb, Amrit Pratap, Sameer Agarwal, and T. Meyarivan. A fast and elitist multiobjective
genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6(2):182–197, 2002.
[7] J.J. Durillo, A.J. Nebro, and E. Alba. The jMetal framework for multi-objective optimization:
Design and architecture. In CEC 2010, pages 4138–4325, Barcelona, Spain, July 2010.
[8] J.J. Durillo, A.J. Nebro, F. Luna, and E. Alba. On the effect of the steady-state selection scheme
in multi-objective genetic algorithms. In 5th International Conference, EMO 2009, volume 5467 of
Lecture Notes in Computer Science, pages 183–197, Nantes, France, April 2009. Springer Berlin /
Heidelberg.
[9] J.J. Durillo, A.J. Nebro, F. Luna, B. Dorronsoro, and E. Alba. jMetal: a Java framework for de-
veloping multi-objective optimization metaheuristics. Technical Report ITI-2006-10, Departamento
de Lenguajes y Ciencias de la Computación, University of Málaga, E.T.S.I. Informática, Campus
de Teatinos, 2006.
[10] Juan J. Durillo and Antonio J. Nebro. jMetal: A Java framework for multi-objective optimization.
Advances in Engineering Software, 42(10):760–771, 2011.
[11] Juan J. Durillo, Antonio J. Nebro, Francisco Luna, and Enrique Alba. Solving three-objective
optimization problems using a new hybrid cellular genetic algorithm. In G. Rudolph, T. Jensen,
S. Lucas, C. Poloni, and N. Beume, editors, Parallel Problem Solving from Nature - PPSN X, volume
5199 of Lecture Notes in Computer Science, pages 661–670. Springer, 2008.
[12] M. Emmerich, N. Beume, and B. Naujoks. An EMO algorithm using the hypervolume measure
as selection criterion. In C.A. Coello, A. Hernández, and E. Zitzler, editors, Third International
Conference on Evolutionary MultiCriterion Optimization, EMO 2005, volume 3410 of LNCS, pages
62–76. Springer, 2005.
[13] H. Eskandari, C. D. Geiger, and G. B. Lamont. FastPGA: A dynamic population sizing approach for
solving expensive multiobjective optimization problems. In S. Obayashi, K. Deb, C. Poloni, T. Hi-
royasu, and T. Murata, editors, Evolutionary Multi-Criterion Optimization. 4th International Con-
ference, EMO 2007, volume 4403 of Lecture Notes in Computer Science, pages 141–155. Springer,
2007.
[14] C.M. Fonseca and P.J. Fleming. Multiobjective optimization and multiple constraint handling
with evolutionary algorithms - Part II: Application example. IEEE Transactions on Systems, Man,
and Cybernetics, 28:38–47, 1998.
[15] D. Greiner, J.M. Emperador, G. Winter, and B. Galván. Improving computational mechanics opti-
mum design using helper objectives: An application in frame bar structures. In S. Obayashi, K. Deb,
C. Poloni, T. Hiroyasu, and T. Murata, editors, Fourth International Conference on Evolutionary
MultiCriterion Optimization, EMO 2007, volume 4403 of Lecture Notes in Computer Science, pages
575–589, Berlin, Germany, 2006. Springer.
[16] S. Huband, P. Hingston, L. Barone, and L. While. A review of multiobjective test problems and
a scalable test problem toolkit. IEEE Transactions on Evolutionary Computation, 10(5):477–506,
October 2006.
[17] J. Knowles, L. Thiele, and E. Zitzler. A Tutorial on the Performance Assessment of Stochastic
Multiobjective Optimizers. Technical Report 214, Computer Engineering and Networks Laboratory
(TIK), ETH Zurich, 2006.
[18] J. D. Knowles and D. W. Corne. Approximating the nondominated front using the Pareto archived
evolution strategy. Evolutionary Computation, 8(2):149–172, 2000.
[19] S. Kukkonen and J. Lampinen. GDE3: The third evolution step of generalized differential evolution.
In IEEE Congress on Evolutionary Computation (CEC’2005), pages 443 – 450, 2005.
[20] A. Kurpati, S. Azarm, and J. Wu. Constraint handling improvements for multi-objective genetic
algorithms. Structural and Multidisciplinary Optimization, 23(3):204–213, 2002.
[21] F. Kursawe. A variant of evolution strategies for vector optimization. In H.P. Schwefel and
R. Männer, editors, Parallel Problem Solving from Nature, pages 193–197, Berlin, Germany, 1990.
Springer-Verlag.
[22] H. Li and Q. Zhang. Multiobjective optimization problems with complicated Pareto sets, MOEA/D
and NSGA-II. IEEE Transactions on Evolutionary Computation, 12(2):284–302, April 2009.
[23] A.J. Nebro, E. Alba, G. Molina, F. Chicano, F. Luna, and J.J. Durillo. Optimal antenna placement
using a new multi-objective CHC algorithm. In GECCO ’07: Proceedings of the 9th Annual Conference
on Genetic and Evolutionary Computation, pages 876–883, New York, NY, USA, 2007. ACM Press.
[24] A.J. Nebro, J.J. Durillo, and C.A. Coello Coello. Analysis of leader selection strategies in a multi-
objective particle swarm optimizer. In 2013 IEEE Congress on Evolutionary Computation (CEC),
pages 3153–3160, 2013.
[25] A.J. Nebro, J.J. Durillo, J. García-Nieto, C.A. Coello Coello, F. Luna, and E. Alba. SMPSO:
A new PSO-based metaheuristic for multi-objective optimization. In 2009 IEEE Symposium on
Computational Intelligence in Multicriteria Decision-Making (MCDM 2009), pages 66–73. IEEE
Press, 2009.
[26] A.J. Nebro, J.J. Durillo, F. Luna, B. Dorronsoro, and E. Alba. Design issues in a multiobjective
cellular genetic algorithm. In S. Obayashi, K. Deb, C. Poloni, T. Hiroyasu, and T. Murata, editors,
Evolutionary Multi-Criterion Optimization. 4th International Conference, EMO 2007, volume 4403
of Lecture Notes in Computer Science, pages 126–140. Springer, 2007.
[27] A.J. Nebro, J.J. Durillo, F. Luna, B. Dorronsoro, and E. Alba. MOCell: A cellular genetic algorithm for
multiobjective optimization. International Journal of Intelligent Systems, 24(7):726–746, 2009.
[28] A.J. Nebro, J.J. Durillo, C.A. Coello Coello, M. Machín, and B. Dorronsoro. A study of the combi-
nation of variation operators in the NSGA-II algorithm. In Proceedings of the 15th Conference of the
Spanish Association for Artificial Intelligence (CAEPIA 2013), pages 269–278, 2013.
[29] Antonio J. Nebro, Juan J. Durillo, C.A. Coello Coello, Francisco Luna, and Enrique Alba. Design
issues in a study of convergence speed in multi-objective metaheuristics. In G. Rudolph, T. Jensen,
S. Lucas, C. Poloni, and N. Beume, editors, Parallel Problem Solving from Nature - PPSN X, volume
5199 of Lecture Notes in Computer Science, pages 763–772. Springer, 2008.
[30] Antonio J. Nebro, Francisco Luna, Enrique Alba, Bernabé Dorronsoro, Juan J. Durillo, and Andreas
Beham. AbYSS: Adapting Scatter Search to Multiobjective Optimization. IEEE Transactions on
Evolutionary Computation, 12(4), August 2008.
[31] A. Osyczka and S. Kundu. A new method to solve generalized multicriteria optimization problems
using a simple genetic algorithm. Structural Optimization, 10:94–99, 1995.
[32] T. Ray, K. Tai, and K.C. Seow. An Evolutionary Algorithm for Multiobjective Optimization.
Engineering Optimization, 33(3):399–424, 2001.
[33] M. Reyes and C.A. Coello Coello. Improving PSO-based multi-objective optimization using crowd-
ing, mutation and ε-dominance. In C.A. Coello, A. Hernández, and E. Zitzler, editors, Third In-
ternational Conference on Evolutionary MultiCriterion Optimization, EMO 2005, volume 3410 of
LNCS, pages 509–519. Springer, 2005.
[34] J.D. Schaffer. Multiple objective optimization with vector evaluated genetic algorithms. In J.J.
Grefenstette, editor, First International Conference on Genetic Algorithms, pages 93–100, Hillsdale,
NJ, 1987.
[35] N. Srinivas and K. Deb. Multiobjective function optimization using nondominated sorting genetic
algorithms. Evolutionary Computation, 2(3):221–248, 1995.
[36] M. Tanaka, H. Watanabe, Y. Furukawa, and T. Tanino. GA-based decision support system for
multicriteria optimization. In Proceedings of the IEEE International Conference on Systems, Man,
and Cybernetics, volume 2, pages 1556–1561, 1995.
[38] L. While, L. Bradstreet, and L. Barone. A fast way of calculating exact hypervolumes. IEEE
Transactions on Evolutionary Computation, 16(1):86–95, 2012.
[39] S. Zapotecas and C.A. Coello Coello. A multi-objective particle swarm optimizer based on decom-
position. In GECCO, pages 69–76, 2011.
[40] Q. Zhang, A. Zhou, S. Z. Zhao, P. N. Suganthan, W. Liu, and S. Tiwari. Multiobjective optimiza-
tion test instances for the CEC 2009 special session and competition. Technical Report CES-487,
University of Essex and Nanyang Technological University, Essex, UK and Singapore, September
2008.
[41] A. Zhou, Y. Jin, Q. Zhang, B. Sendhoff, and E. Tsang. Combining model-based and genetics-based
offspring generation for multi-objective optimization using a convergence criterion. In 2006 IEEE
Congress on Evolutionary Computation, pages 3234–3241, 2006.
[42] E. Zitzler, K. Deb, and L. Thiele. Comparison of multiobjective evolutionary algorithms: Empirical
results. Evolutionary Computation, 8(2):173–195, Summer 2000.
[43] E. Zitzler, M. Laumanns, and L. Thiele. SPEA2: Improving the strength Pareto evolutionary
algorithm. In K. Giannakoglou, D. Tsahalis, J. Periaux, P. Papailou, and T. Fogarty, editors,
EUROGEN 2001. Evolutionary Methods for Design, Optimization and Control with Applications to
Industrial Problems, pages 95–100, Athens, Greece, 2002.
[44] E. Zitzler and L. Thiele. Multiobjective evolutionary algorithms: a comparative case study and the
strength Pareto approach. IEEE Transactions on Evolutionary Computation, 3(4):257–271, 1999.
[45] E. Zitzler, L. Thiele, M. Laumanns, C.M. Fonseca, and V.G. Da Fonseca. Performance assess-
ment of multiobjective optimizers: an analysis and review. IEEE Transactions on Evolutionary
Computation, 7:117–132, 2003.
[46] Eckart Zitzler and Simon Künzli. Indicator-based selection in multiobjective search. In Xin Yao
et al., editors, Parallel Problem Solving from Nature (PPSN VIII), pages 832–842, Berlin, Germany,
2004. Springer-Verlag.