
jMetal 4.5 User Manual

Antonio J. Nebro, Juan J. Durillo


January 21, 2014
Contents

Preface 1

1 Overview 3
1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Design goals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Summary of Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.4 Reference papers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.5 License . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

2 Installation 7
2.1 Unpacking the sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 Command line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2.1 Setting the environment variable CLASSPATH . . . . . . . . . . . . . . . . . . . . . . 8
2.2.2 Compiling the sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2.3 Configuring and executing an algorithm . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3 Netbeans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.3.1 Creating the project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.3.2 Configuring and executing an algorithm . . . . . . . . . . . . . . . . . . . . . . . . 9
2.4 Eclipse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.4.1 Creating the project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.4.2 Configuring and executing an algorithm . . . . . . . . . . . . . . . . . . . . . . . . 10
2.5 IntelliJ Idea . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.5.1 Creating the project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.5.2 Configuring and executing an algorithm . . . . . . . . . . . . . . . . . . . . . . . . 10
2.6 Using the jMetal.jar file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

3 Architecture 13
3.1 Basic Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.1.1 Encoding of Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.1.2 Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.1.3 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.1.4 Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.2 jMetal Package Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.3 Case Study: NSGA-II . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.3.1 Class NSGAII.java . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.3.2 Class NSGAII main . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

4 Experimentation with jMetal 31
4.1 The jmetal.experiments.Settings Class . . . . . . . . . . . . . . . . . . . . . . . . . . 31
4.2 An example of Settings class: NSGA-II . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.3 The jmetal.experiments.Main class . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.4 Experimentation Example: NSGAIIStudy . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.4.1 Defining the experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.4.2 Running the experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.4.3 Analyzing the output results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.5 Experimentation example: StandardStudy . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.6 Experiments when the Pareto fronts of the problems are unknown . . . . . . . . . . . . . 49
4.7 Using quality indicators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.8 Running experiments in parallel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

5 Parallel Algorithms 51
5.1 The IParallelEvaluator Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
5.2 Evaluating Solutions In Parallel in NSGA-II: pNSGAII . . . . . . . . . . . . . . . . . . . . 52
5.3 About Parallel Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54

6 How-to’s 57
6.1 How to use binary representations in jMetal . . . . . . . . . . . . . . . . . . . . . . . . . . 57
6.2 How to use permutation representations in jMetal . . . . . . . . . . . . . . . . . . . . . . 59
6.3 How to use the Mersenne Twister pseudorandom number generator? . . . . . . . . . . . . 60
6.4 How to create a new solution type having mixed variables? . . . . . . . . . . . . . . . . . 60
6.5 How to obtain the non-dominated solutions from a file? . . . . . . . . . . . . . . . . . . . 62
6.6 How to use the WFG Hypervolume algorithm . . . . . . . . . . . . . . . . . . . . . . . . . 63
6.7 How to configure the algorithms from a configuration file? . . . . . . . . . . . . . . . . . . 64

7 What about’s 65
7.1 What about developing single-objective metaheuristics with jMetal? . . . . . . . . . . . . 65
7.2 What about optimized variables and solution types? . . . . . . . . . . . . . . . . . . . . . 65

8 Versions and Release Notes 69


8.1 Version 4.5 (21st January 2014) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
8.2 Version 4.4 (23rd July 2013) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
8.3 Version 4.3 (3rd January 2013) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
8.4 Version 4.2 (14th November 2012) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
8.5 Version 4.0 (10th November 2011) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
8.6 Version 3.1 (1st October 2010) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
8.7 Version 3.0 (28th February 2010) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
8.8 Version 2.2 (28th May 2009) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
8.9 Version 2.1 (23rd February 2009) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
8.10 Version 2.0 (23rd December 2008) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75

Bibliography 76

List of Figures

2.1 jMetal source code directory structure. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8


2.2 Running NSGA-II using the jMetal.jar file. . . . . . . . . . . . . . . . . . . . . . . . . . 11

3.1 jMetal class diagram. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14


3.2 Elements describing solution representations into jMetal. . . . . . . . . . . . . . . . . . . . 14
3.3 jMetal packages. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.4 UML diagram of NSGAII. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

4.1 Output directories and files after running the experiment. . . . . . . . . . . . . . . . . . . 43


4.2 Boxplots of the values obtained after applying the hypervolume quality indicator (notch
= true). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

List of Tables

4.1 EPSILON. Mean and standard deviation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43


4.2 EPSILON. Median and IQR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.3 HV. Mean and standard deviation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.4 HV. Median and IQR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.5 Average Rankings of the algorithms according to the Epsilon indicator . . . . . . . . . . 44
4.6 ZDT1.HV. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.7 ZDT2.HV. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.8 ZDT3.HV. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.9 ZDT4.HV. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.10 DTLZ1.HV. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.11 WFG2.HV. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.12 ZDT1 ZDT2 ZDT3 ZDT4 DTLZ1 WFG2.HV. . . . . . . . . . . . . . . . . . . . . . . 46
4.13 LZ09 benchmark. Results of the Wilcoxon rank-sum test applied to the IHV values [28]. . 46

5.1 Solving ZDT1 with NSGA-II and pNSGAII with 1, 8, 32, 128, and 512 threads (times in
milliseconds). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
5.2 Solving ZDT1b with NSGA-II and pNSGAII with 1, 8, 32, 128, and 512 threads (times in
milliseconds). ZDT1b is the same problem as ZDT1 but includes an idle loop in the evaluation
function to increase its computing time. . . . . . . . . . . . . . . . . . . . . . . . . . . . 55

Preface

This document contains the manual of jMetal, a framework for multi-objective optimization with meta-
heuristics developed in the Computer Science Department of the University of Málaga.
The jMetal project began in 2006 with the idea of writing, starting from a former C++ package, a Java
tool to be used in our research on multi-objective optimization techniques with metaheuristic algorithms.
We decided to make the package publicly available in November 2006, and it was hosted at SourceForge
in November 2008 (http://jmetal.sourceforge.net). jMetal is open-source software, and it can
be downloaded from http://sourceforge.net/projects/jmetal; as of today, it has been downloaded
more than 9000 times.
Two versions of jMetal written in different languages are works in progress:
• jMetalCpp (http://jmetalcpp.sourceforge.net). This version is coded in C++ and has been
available since February 2012. It implements about 70% of the Java version.
• jMetal.Net (http://jmetalnet.sourceforge.net/), which is implemented in C#. Several pre-releases
have been available since June 2011. The latest release covers about 10% of the original Java
version.
This manual covers the Java version and is structured into eight chapters, covering issues such as
installation, architecture description, examples of use, parallelism, a how-to's section, and a summary
of versions and release notes.

Chapter 1

Overview

jMetal stands for Metaheuristic Algorithms in Java, and it is an object-oriented Java-based framework
for multi-objective optimization with metaheuristic techniques. jMetal provides a rich set of classes
which can be used as the building blocks of multi-objective techniques; this way, by taking advantage
of code reuse, the algorithms share the same base components, such as implementations of genetic
operators and density estimators, thus facilitating not only the development of new multi-objective
techniques but also the execution of different kinds of experiments. The inclusion of a number of classical
and state-of-the-art algorithms, many problems usually included in performance studies, and a set of
quality indicators allows newcomers not only to study the basic principles of multi-objective optimization
with metaheuristics but also to apply them to solve real-world problems.
The jMetal project is continuously evolving. As we are researchers, not a software company, new
versions are released when we require new features to be added to the software to carry out our research
activities.

1.1 Motivation
When we started to work on metaheuristics for multi-objective optimization in 2004, we did not find
any software package satisfying our needs. The C implementation of NSGA-II, the most used multi-
objective algorithm, which is publicly available¹, was difficult to use as the basis of new algorithms, in part
due to its lack of an object-oriented design. An interesting choice was PISA [2], a C-based framework
for multi-objective optimization which is based on separating the algorithm-specific part of an optimizer
from the application-specific part. This is carried out by using a shared-file mechanism to communicate
between the module executing the application and the module running the metaheuristic. A drawback of
PISA is that its internal design hinders code reuse. From our point of view (we are computer science
engineers), it became clear that it would be easier to develop our own tool from scratch than to work
with existing software. The result is the Java-based framework jMetal.
When we started to use jMetal in our research, we decided to make it available to the community of
people interested in multi-objective optimization. It is licensed under the GNU Lesser General Public
License, and it can be obtained freely from http://jmetal.sourceforge.net. During the development
of jMetal, other Java-based software tools have been offered by other groups (e.g., EVA2², ECJ³,
OPT4J⁴). All these toolboxes can be useful for many researchers but, while jMetal is specifically
oriented to multi-objective optimization with metaheuristics, most existing frameworks focus mainly
on evolutionary algorithms, and many of them are centered on single-objective optimization, offering
extensions to the multi-objective domain.
1 NSGA-II: http://www.iitk.ac.in/kangal/codes.shtml
2 EVA2: http://www.ra.cs.uni-tuebingen.de/software/EvA2/
3 ECJ: http://www.cs.gmu.edu/ eclab/projects/ecj/
4 OPT4J: http://opt4j.sourceforge.net/


However, we have frequently faced the need to solve single-objective optimization problems, so we
have used jMetal for that purpose as well, given that developing single-objective metaheuristics from their
multi-objective counterparts is usually an easy task. Thus, jMetal currently provides many algorithms
for solving problems having a single objective.

1.2 Design goals


We set as design goals that jMetal should be simple and easy to use, portable (hence the
choice of Java), flexible, and extensible. We detail these goals next:
• Simplicity and ease of use. These are the key goals: if they are not fulfilled, few people will use
the software. The classes provided by jMetal follow the principle that each component should
only do one thing, and do it well. Thus, the base classes (SolutionSet, Solution, Variable, etc.)
and their operations are intuitive and, as a consequence, easy to understand and use. Furthermore,
the framework includes the implementation of many metaheuristics, which can be used as templates
for developing new techniques.
• Flexibility. This is a generic goal. On the one hand, the software must incorporate a simple
mechanism to execute the algorithms under different parameter settings, including algorithm-
specific parameters as well as those related to the problem to solve. On the other hand, issues such
as choosing a real or binary-coded representation and, accordingly, the concrete operators to use,
should require minimal modifications in the programs.
• Portability. The framework and the algorithms developed with it should run on machines
with different architectures and/or distinct operating systems. The use of Java as the programming
language fulfills this goal; furthermore, the programs do not need to be re-compiled to run in a
different environment.
• Extensibility. New algorithms, operators, and problems should be easily added to the framework.
This goal is achieved by using some mechanisms of Java, such as inheritance and late binding. For
example, all the MOPs inherit from the class Problem, so a new problem can be created just by
writing the methods specified by that class (see the sketch after this list); once the class defining
the new problem is compiled, nothing more has to be done: the late binding mechanism loads the
code of the MOP only when it is requested by an algorithm. This way, jMetal separates the
algorithm-specific part from the application-specific part.

1.3 Summary of Features


A summary of jMetal main features is the following:
• Implementation of a number of classic and modern multi-objective optimization algorithms: NSGA-
II [6], SPEA2 [43], PAES [18], PESA-II [3], OMOPSO [33], MOCell [27], AbYSS [30], MOEA/D [22],
Densea [15], CellDE [11], GDE3 [19], FastPGA [13], IBEA [46], SMPSO [25], SMPSOhv [24],
MOCHC [23], SMS-EMOA [12], dMOPSO [39], adaptive and random NSGA-II [28].
• Parallel (multithreaded) versions of MOEA/D, NSGA-II, and SMPSO (referred to as pMOEAD,
pNSGAII, and pSMPSO, respectively).
• A rich set of test problems including:
– Problem families: Zitzler-Deb-Thiele (ZDT) [42], Deb-Thiele-Laumanns-Zitzler (DTLZ) [5],
Walking-Fish-Group (WFG) [16], CEC2009 (unconstrained problems) [40],
and the Li-Zhang benchmark [22].
– Classical problems: Kursawe [21], Fonseca [14], Schaffer [34].

– Constrained problems: Srinivas[35], Tanaka [36], Osyczka2 [31], Constr Ex [6], Golinski [20],
Water [32].
– Combinatorial problems: the multi-objective traveling salesman problem (mTSP) and the
multi-objective quadratic assignment problem (mQAP).
• Single-objective metaheuristics: GAs (generational, steady-state, cellular), PSO, DE, CMA-ES,
(µ + λ) and (µ, λ) ESs.
• Implementation of a number of widely used quality indicators: Hypervolume [44], Spread [6],
Generational Distance [37], Inverted Generational Distance [37], Epsilon [17].
• Different variable representations: binary, real, binary-coded real, integer, permutation.
• Validation of the implementation: we compared our implementations of NSGA-II and SPEA2 with
the original versions, achieving competitive results [9].
• Support for performing experimental studies, including the automatic generation of
– LaTeX tables with the results after applying quality indicators,
– LaTeX tables summarizing statistical pairwise comparisons, obtained by applying the Wilcoxon
test to the results, and
– R (http://www.r-project.org/) boxplots summarizing those results.
In addition, jMetal includes the possibility of using several threads for performing these kinds of
experiments in such a way that several independent runs can be executed in parallel by using
modern multi-core CPUs.
• A Web site (http://jmetal.sourceforge.net) containing the source codes, the user manual and,
among other information, the Pareto fronts of the included MOPs, references to the implemented
algorithms, and references to papers using jMetal.

1.4 Reference papers


If you want to cite jMetal, please use these references [10][7]:

@article{DN11,
  author  = "J. J. Durillo and A. J. Nebro",
  title   = "{jMetal}: A Java framework for multi-objective optimization",
  journal = "Advances in Engineering Software",
  volume  = "42",
  number  = "10",
  pages   = "760-771",
  year    = "2011",
  issn    = "0965-9978",
  doi     = "10.1016/j.advengsoft.2011.05.014",
  url     = "http://www.sciencedirect.com/science/article/pii/S0965997811001219",
}

@inproceedings{DNA10,
  Address   = {Barcelona, Spain},
  Author    = {J. J. Durillo and A. J. Nebro and E. Alba},
  Booktitle = {CEC 2010},
  Month     = {July},
  Pages     = {4138-4325},
  OPTPublisher = {Springer Berlin / Heidelberg},
  OPTSeries = {Lecture Notes in Computer Science},
  Title     = {The {jMetal} Framework for Multi-Objective Optimization: Design and Architecture},
  OPTVolume = {5467},
  Year      = {2010}}

1.5 License
jMetal is licensed under the Creative Commons GNU Lesser General Public License5.

5 http://creativecommons.org/licenses/LGPL/2.1/
Chapter 2

Installation

jMetal is written in Java and does not require any additional software; the only requirement is Java JDK
1.5 or newer. The source code is bundled in a tar.gz package which can be downloaded from SourceForge1.
A jar file is also available if you are only interested in running the provided algorithms. The jMetal Web
page at SourceForge is: http://jmetal.sourceforge.net.
There are several ways to work with Java programs; we briefly describe here how to compile and
run algorithms developed with jMetal by using the command line in a text terminal and the Integrated
Development Environments (IDEs) Netbeans2, Eclipse3, and IntelliJ Idea4. There are several ways to
create a project from existing sources with these IDEs; we merely show one way to do it.
In this chapter, we also detail how to execute the algorithms from the jar file, without having to
work with the source code.

2.1 Unpacking the sources


Independently of your favorite way of working with Java, you have to decompress the tar.gz package
and untar the resulting tarball. Using the command line, this can be done by typing:

gzip -d jmetal.tar.gz
tar xf jmetal.tar

Alternatively, you can type:

tar zxf jmetal.tar.gz

Let us name the directory where the tarball is decompressed as JMETALHOME. As a result, you will
find in it the source code of jMetal, which has the structure depicted in Figure 2.1.

2.2 Command line


If you intend to use jMetal from a text-based terminal, please follow these steps. We assume
that you are using a bash shell in a Unix-like environment (e.g., Linux, Mac OS X, or Cygwin under
Windows).
1 http://sourceforge.net/projects/jmetal
2 http://www.netbeans.org/
3 http://www.eclipse.org/
4 https://www.jetbrains.com/idea/


Figure 2.1: jMetal source code directory structure.

2.2.1 Setting the environment variable CLASSPATH


To add directory JMETALHOME to the environment variable CLASSPATH, type:

export CLASSPATH=$CLASSPATH:$JMETALHOME

2.2.2 Compiling the sources


Move to directory JMETALHOME and compile the sources. There are several ways to do that; we detail
one of them:
STEP 1. Compile the problems

javac jmetal/problems/*.java
javac jmetal/problems/ZDT/*.java
javac jmetal/problems/DTLZ/*.java
javac jmetal/problems/WFG/*.java

STEP 2. Compile the algorithms

javac jmetal/metaheuristics/nsgaII/*.java
javac jmetal/metaheuristics/paes/*.java
javac jmetal/metaheuristics/spea2/*.java
javac jmetal/metaheuristics/mopso/*.java
javac jmetal/metaheuristics/mocell/*.java
javac jmetal/metaheuristics/abyss/*.java

Of course, you do not need to compile all of them; choose only those you are interested in.

2.2.3 Configuring and executing an algorithm


Let us suppose that we intend to use NSGA-II to solve a multi-objective optimization problem. There
are several ways to accomplish this:

1. Configuring the algorithm by editing the NSGAII_main.java program (see Section 3.3).

2. By using the jmetal.experiments package (see Chapter 4).

Here, we briefly describe the first option, which consists in editing the file NSGAII_main.java belonging to
the package jmetal/metaheuristics/nsgaII, recompiling, and executing it:

javac jmetal/metaheuristics/nsgaII/*.java
java jmetal.metaheuristics.nsgaII.NSGAII_main

As a result, you will obtain two files: VAR, containing the values of the variables of the obtained
approximation set, and FUN, which stores the corresponding values of the objective functions. Needless to
say, you can change the names of these files by editing NSGAII_main.java.

2.3 Netbeans
We describe here how to compile and use jMetal with NetBeans 7.0.1.

2.3.1 Creating the project


1. Select File → New Project.

2. Choose Java Project with Existing Sources from the General category, and click the Next button.

3. Write a project name (e.g. jMetal) and choose the directory (folder) where you want to deploy the
project. Check Set as Main Project and Click Next.

4. Click Add Folder to add the JMETALHOME directory to the source package folders. Click Finish

2.3.2 Configuring and executing an algorithm


We use the metaheuristic NSGA-II as an example. To configure the algorithm, click on the Files tab in
the IDE and open the file jMetal Source Packages → jmetal → metaheuristics → nsgaII →
NSGAII_main.java. To run the algorithm, put the mouse pointer on the file name in the file tree, click
the right button, and choose Run File.
As a result, you obtain two files containing the Pareto optimal solutions and the Pareto front found
by the metaheuristic. By default, these files are named VAR and FUN, respectively. They are located
in the project folder.

2.4 Eclipse
We describe next how to compile and use jMetal using Eclipse Kepler.

2.4.1 Creating the project


1. Select File → New → Java Project.

2. Write a project name (e.g., jMetal) and click on the Next button.

3. Select Link additional source and browse to set JMETALHOME as linked folder location.

4. Click Finish.

2.4.2 Configuring and executing an algorithm


We use NSGA-II as an example again. To configure the algorithm, open the file NSGAII_main.java by
selecting it from the package jmetal.metaheuristics.nsgaII and modify the file according to your
preferences.
To run the algorithm, right-click on NSGAII_main.java in the project tree or in a blank part of the
window containing the file, and select Run as → Java Application. As a result, you obtain two files
containing the Pareto optimal solutions and the Pareto front found by the algorithm. By default,
these files are named VAR and FUN, respectively. They are located in the directory where the project
workspace is stored (e.g., $HOME/Documents/workspace/jmetal on a Unix-based machine).

2.5 IntelliJ Idea


This section describes how to create a project and run jMetal with IntelliJ Idea CE 12.

2.5.1 Creating the project


The first step is to create a subdirectory called src in the JMETALHOME directory and move the jMetal
source tree into it. Then, run IntelliJ Idea, choose Create New Project in the Quick Start panel, and
follow these steps:

1. Select Java Module.


2. Write a project name (e.g., jMetal), choose JMETALHOME as the project location, and click on
the Next button. In the next panel, click Finish.

3. Compile the project by selecting Build → Make project.

2.5.2 Configuring and executing an algorithm


We use NSGA-II as an example once more. To configure the algorithm, open the file NSGAII_main.java
by selecting it from the package jmetal.metaheuristics.nsgaII and modify the file according to your
preferences.
To run the algorithm, right-click on NSGAII_main.java in the project tree or in a blank part of the
window containing the file and select Run "NSGAII_main.main()". As a result, you obtain two files
containing the Pareto optimal solutions and the Pareto front found by the algorithm. By default, these
files are named VAR and FUN, respectively. They are located in the JMETALHOME directory.

2.6 Using the jMetal.jar file


The jMetal.jar file contains the byte codes of all the jMetal classes; this way, you can use it as a library,
and the algorithms can be run without having to modify or compile the source code.
If you are comfortable working with Java, using a jar file is pretty easy: just include it in the
CLASSPATH environment variable and then you can invoke the metaheuristics from the command line.
Figure 2.2 contains a screen capture showing how to execute NSGA-II using two of the alternatives
provided in jMetal to run a metaheuristic.

Figure 2.2: Running NSGA-II using the jMetal.jar file.


Chapter 3

Architecture

We use the Unified Modelling Language (UML) to describe the architecture and components of jMetal.
A UML class diagram representing the main components and their relationships is depicted in Figure 3.1.
The diagram is a simplified version in order to make it understandable. The basic architecture of
jMetal relies on the idea that an Algorithm solves a Problem using one (and possibly more) SolutionSet
and a set of Operator objects. We have used a generic terminology to name the classes in order to make
them general enough to be used in any metaheuristic. In the context of evolutionary algorithms, populations
and individuals correspond to SolutionSet and Solution jMetal objects, respectively; the same applies
to particle swarm optimization algorithms concerning the concepts of swarm and particles.

3.1 Basic Components


In this section we describe the approaches taken in jMetal to implement solution encodings, operators,
problems, and algorithms.

3.1.1 Encoding of Solutions


One of the first decisions that have to be taken when using metaheuristics is to define how to encode
or represent the tentative solutions of the problem to solve. Representation strongly depends on the
problem and determines the operations (e.g., recombination with other solutions, local search procedures,
etc.) that can be applied. Thus, selecting a specific representation has a great impact on the behavior
of metaheuristics and, hence, in the obtained results.
Fig. 3.2 depicts the basic components that are used for representing solutions into the framework. A
Solution is composed of set of Variable objects, which can be of different types (binary, real, binary-
coded real, integer, permutation, etc) plus an array to store the fitness values. With the idea of providing
a flexible and extensible scheme, each Solution has associated a type (the SolutionType class in the
figure). The solution type allows to define the variable types of the Solution and creating them, by
using the createVariables() method. This is illustrated in Listing 3.1 which shows the code of the
RealSolutionType class, used to characterize solutions composed only by real variables. jMetal provides
similar solutions types to represent integer, binary, permutation, and other representations, as can be
seen in Fig. 3.2.
The interesting point of using solution types is that it is very simple to define more complex rep-
resentations, mixing different variable types. For example, if we need a new solution representation
consisting in a real, an integer, and a permutation of integers, a new class extending SolutionType can
be defined for representing the new type, where basically only the createVariables() method should
be redefined. Listing 3.2 shows the code required for this new type of solution. This is explained in more
detail in Section 6.4.


Figure 3.1: jMetal class diagram. The figure shows the core classes Algorithm, Operator, Problem,
SolutionSet, Solution, SolutionType, and Variable, their main fields and methods, and representative
subclasses: metaheuristics (NSGAII, SPEA2, PAES, AbYSS, MOEAD, IBEA, MOCell, SMPSO), operator
families (Selection, Mutation, Crossover, LocalSearch) with concrete operators (BinaryTournament,
PolynomialMutation, SinglePointCrossover, SBXCrossover), problems (Schaffer, Kursawe, ZDT1, DTLZ2,
WFG3), and variable types (Binary, Real, BinaryReal).

Figure 3.2: Elements describing solution representations in jMetal. A Solution consists of one or more
Variable objects (Binary, Real, BinaryReal, Int, Permutation, etc.) and has an associated SolutionType,
whose subclasses include BinarySolutionType, RealSolutionType, IntSolutionType, BinaryRealSolutionType,
IntRealSolutionType, and PermutationSolutionType, each providing a createVariables() method.



// RealSolutionType.java

package jmetal.encodings.solutionType;

import jmetal.core.Problem;
import jmetal.core.SolutionType;
import jmetal.core.Variable;
import jmetal.encodings.variable.Real;

/**
 * Class representing a solution type composed of real variables
 */
public class RealSolutionType extends SolutionType {

  /**
   * Constructor
   * @param problem
   * @throws ClassNotFoundException
   */
  public RealSolutionType(Problem problem) throws ClassNotFoundException {
    super(problem);
  } // Constructor

  /**
   * Creates the variables of the solution
   */
  public Variable[] createVariables() {
    Variable[] variables = new Variable[problem_.getNumberOfVariables()];

    for (int var = 0; var < problem_.getNumberOfVariables(); var++)
      variables[var] = new Real(problem_.getLowerLimit(var),
                                problem_.getUpperLimit(var));

    return variables;
  } // createVariables
} // RealSolutionType

Listing 3.1: The RealSolutionType class, which represents solutions composed of real variables.

public Variable[] createVariables() {
  Variable[] variables = new Variable[3];

  variables[0] = new Real();
  variables[1] = new Int();
  variables[2] = new Permutation();

  return variables;
} // createVariables

Listing 3.2: Code of the createVariables() method for creating solutions consisting of a Real, an
Integer, and a Permutation.

1  ...
2  // Step 1: creating a map for the operator parameters
3  HashMap parameters = new HashMap();
4
5  // Step 2: configure the operator
6  parameters.put("probability", 0.9);
7  parameters.put("distributionIndex", 20.0);
8
9  // Step 3: create the operator
10 crossover = CrossoverFactory.getCrossoverOperator("SBXCrossover", parameters);
11
12 // Step 4: add the operator to an algorithm
13 algorithm.addOperator("crossover", crossover);
14 ...

Listing 3.3: Configuring an SBX crossover operator.

Once we have the means to define new solution representations or to use existing ones, we can create
solutions and group them into SolutionSet objects (i.e., populations or swarms), as the following sketch
illustrates.
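This is a minimal sketch (not one of the manual's listings) of how solutions can be created for a problem
and stored in a SolutionSet; the chosen problem, solution type, and population size are arbitrary, and
the calls follow the classes shown in Figures 3.1 and 3.2 and the NSGA-II code of Section 3.3.

// Sketch: creating solutions and grouping them into a SolutionSet
Problem problem = new Kursawe("Real", 3);      // a problem accepting real-coded solutions
SolutionSet population = new SolutionSet(10);  // a solution set with capacity for 10 solutions

for (int i = 0; i < 10; i++) {
  Solution solution = new Solution(problem);   // variables are created through the problem's SolutionType
  problem.evaluate(solution);                  // compute the objective values
  population.add(solution);                    // store the solution in the set
}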

3.1.2 Operators
Metaheuristic techniques are based on modifying or generating new solutions from existing ones by
means of the application of different operators. For example, EAs make use of crossover, mutation, and
selection operators for modifying solutions. In jMetal, any operation altering or generating solutions (or
sets of them) inherits from the Operator class, as can be seen in Fig. 3.1.
The framework already incorporates a number of operators, which can be classified into four different
classes:

• Crossover. Represents the recombination or crossover operators used in EAs. Some of the included
operators are the simulated binary (SBX) crossover [4] and the two-points crossover for real and
binary encodings, respectively.

• Mutation. Represents the mutation operator used in EAs. Examples of included operators are
polynomial mutation [4] (real encoding) and bit-flip mutation (binary encoding).

• Selection. This kind of operator is used for performing the selection procedures in many EAs. An
example of selection operator is the binary tournament.

• LocalSearch. This class is intended for representing local search procedures. It contains an extra
method for consulting how many evaluations have been performed after it has been applied.

Each operator contains the setParameter() and getParameter() methods, which are used for
adding and accessing a specific parameter of the operator. For example, the SBX crossover requires
two parameters, a crossover probability (as most crossover operators do) plus a value for the distribution
index (specific to this operator), while a single-point mutation operator only requires the mutation
probability. The operators can also receive their parameters by passing them as an argument when the
operator object is created.
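For instance, a polynomial mutation operator can be configured in the same way as the SBX crossover
of Listing 3.3; the following sketch assumes that the MutationFactory class follows the same pattern as
CrossoverFactory, and the parameter values are illustrative only.

// Sketch: configuring a polynomial mutation operator through its parameter map
HashMap parameters = new HashMap();
parameters.put("probability", 1.0 / problem.getNumberOfVariables());
parameters.put("distributionIndex", 20.0);

Operator mutation = MutationFactory.getMutationOperator("PolynomialMutation", parameters);
algorithm.addOperator("mutation", mutation);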
It is worth noting that when an operator is applied to a given solution, the solution type of that solution
is known. Thus, we can define, for example, a unique two-points crossover operator that can be applied
to both binary and real solutions, using the solution type to select the appropriate code in each case.
To illustrate how operators are used and implemented in jMetal, let us take the SBX crossover operator
as an example. It has two parameters: the crossover probability and the distribution index. The
way of creating and configuring the operator is depicted in Listing 3.3.

1  ...
2  public class MOMetaheuristic extends Algorithm {
3    ...
4    Operator crossoverOperator;
5    crossoverOperator = operators_.get("crossover");
6    ...
7    Solution[] parents = new Solution[2];
8    parents[0] = (Solution) selectionOperator.execute(population);
9    parents[1] = (Solution) selectionOperator.execute(population);
10   Solution[] offSpring = (Solution[]) crossoverOperator.execute(parents);
11   ...
12 } // MOMetaheuristic

Listing 3.4: Using an SBX crossover operator inside an algorithm.

First, a Java HashMap, a map holding (name, object) pairs, is created to store the operator parameters
(line 3), and the parameters are set in lines 6-7; second, the operator is created (line 10) and, finally, it
is added to an algorithm in line 13.
To make use of the operator inside a given algorithm, the following steps have to be carried out (see
Listing 3.4). First, the algorithm must get the previously created operator (line 5), which is then ready
to be used; second, the operator can be executed by invoking its execute() method with the corresponding
parameters. In the case of the SBX crossover the parameters are two solutions previously obtained,
typically after applying a selection operator (lines 8-9). Let us remark here that whenever a crossover
operator is applied to a pair of solutions and the result is another pair of solutions, the code in
Listing 3.4 can remain as is; there is no need to make any modifications.
The implementation of the SBX crossover operator in jMetal is included in the class SBXCrossover (see
Listing 3.5). We can see that this class belongs to the package jmetal.operators.crossover and extends
the Crossover class (lines 3 and 5), and that the two parameters characterizing the operator (crossover
probability and distribution index) are declared in lines 8 and 9. Let us pay attention now to lines 14-15.
An operator can be applied to a given set of encodings, so the adopted approach is to indicate the valid
solution types in a list. In the case of the SBX crossover, the operator is intended for the Real and
ArrayReal solution types, so they are included in the list called VALID_TYPES. Later, this list is used
in the execute() method to check that the solutions to be combined have the correct representation.
The constructor (lines 19-26) merely gets the map received as argument and checks whether some of
the parameters have to be set.
The execute() method receives as a parameter a generic Java Object (line 37), which must represent
an array of two solutions, the parent solutions (line 38). We can see in lines 46-51 how the VALID_TYPES
list is used to check that the parent solutions have valid encodings. Finally, the method calls the
doCrossover() method (line 54), which actually performs the crossover and returns an array with the
two newly generated solutions, which are the return value of the method (line 56).

3.1.3 Problems
In jMetal, all the problems inherits from class Problem. This class contains two basic methods: evaluate()
and evaluateConstraints(). Both methods receive a Solution representing a candidate solution to
the problem; the first one evaluates it, and the second one determines the overall constraint violation of
this solution. All the problems have to define the evaluate() method, while only problems having side
constraints need to define evaluateConstraints(). The constraint handling mechanism implemented
by default is the one proposed in [6].
A key design feature in jMetal is that the problem defines the allowed solutions types that are
suitable to solve it. Listing 3.6 shows the code used for implementing Kursawe’s problem (irrelevant
code is omitted). As we can observe observe, it extends class Problem (line 5). After that, a constructor
method is defined for creating instances of this problem (lines 9-28), which has two parameters: a string
containing a solution type identifier and the number of decision variables of the problem. As a general

1  // SBXCrossover.java
2  ...
3  package jmetal.operators.crossover;
4  ...
5  public class SBXCrossover extends Crossover {
6    ...
7    public static final double ETA_C_DEFAULT = 20.0;
8    private Double crossoverProbability_ = null;
9    private double distributionIndex_ = ETA_C_DEFAULT;
10
11   /**
12    * Valid solution types to apply this operator
13    */
14   private static List VALID_TYPES = Arrays.asList(RealSolutionType.class,
15                                                   ArrayRealSolutionType.class);
16   /**
17    * Constructor
18    */
19   public SBXCrossover(HashMap<String, Object> parameters) {
20     super(parameters);
21
22     if (parameters.get("probability") != null)
23       crossoverProbability_ = (Double) parameters.get("probability");
24     if (parameters.get("distributionIndex") != null)
25       distributionIndex_ = (Double) parameters.get("distributionIndex");
26   } // SBXCrossover
27
28   public Solution[] doCrossover(double probability,
29                                 Solution parent1,
30                                 Solution parent2) throws JMException {
31     ...
32   }
33
34   /**
35    * Executes the operation
36    */
37   public Object execute(Object object) throws JMException {
38     Solution[] parents = (Solution[]) object;
39
40     if (parents.length != 2) {
41       Configuration.logger_.severe("SBXCrossover.execute: operator needs two " +
42           "parents");
43       ...
44     } // if
45
46     if (!(VALID_TYPES.contains(parents[0].getType().getClass()) &&
47           VALID_TYPES.contains(parents[1].getType().getClass()))) {
48       Configuration.logger_.severe("SBXCrossover.execute: the solutions " +
49           "type " + parents[0].getType() + " is not allowed with this operator");
50       ...
51     } // if
52
53     Solution[] offSpring;
54     offSpring = doCrossover(crossoverProbability_, parents[0], parents[1]);
55
56     return offSpring;
57   } // execute
58 } // SBXCrossover

Listing 3.5: Implementation of the SBXCrossover operator.



Figure 3.3: jMetal packages: core, encodings, experiments, metaheuristics, operators, problems,
qualityIndicators, and util.

As a general rule, all the problems should have as their first parameter the string indicating the solution
type. The basic features of the problem (number of variables, number of objectives, and number of
constraints) are defined in lines 10-12. The limits of the values of the decision variables are set in
lines 15-21. The statements in lines 23-27 are used to specify that the allowed solution representations
are binary-coded real and real, so the corresponding SolutionType object is created and assigned to a
state variable.
After the constructor, the evaluate() method is redefined (lines 33-43); in this method, after computing
the two objective function values, they are stored into the solution by using the setObjective()
method of Solution (lines 41 and 42).
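As a usage sketch (the number of decision variables is chosen arbitrarily here), a problem such as
Kursawe can be instantiated with either of its two allowed solution type identifiers, and candidate
solutions are then evaluated through the methods described above:

// Hypothetical driver fragment: instantiating Kursawe with each allowed representation
Problem realCoded   = new Kursawe("Real", 3);        // real variables
Problem binaryCoded = new Kursawe("BinaryReal", 3);  // binary-coded real variables

Solution candidate = new Solution(realCoded);
realCoded.evaluate(candidate);             // sets the two objective values
realCoded.evaluateConstraints(candidate);  // does nothing here, since Kursawe has no side constraints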
Many commonly used benchmark problems are already included in jMetal. Examples are the ones
proposed by Zitzler-Deb-Thiele (ZDT) [42], Deb-Thiele-Laumanns-Zitzler (DTLZ) [5], the Walking-Fish-
Group (WFG) [16], and the Li-Zhang benchmark [22].

3.1.4 Algorithms
The last core class in the UML diagram in Fig. 3.1 to comment is Algorithm, an abstract class which
must be inherited by the metaheuristics included in the framework. In particular, the abstract method
execute() must be implemented; this method is intended to run the algorithm, and it returns as a
result a SolutionSet.
An instance object of Algorithm may require some application-specific parameters, that can be added
and accessed by using the methods addParameter() and getParameter(), respectively. Similarly, an
algorithm may also make use of some operators, so methods for incorporating operators (addOperator())
and to get them (getOperator()) are provided. A detailed example of algorithm can be found in
Section 3.3, where the implementation of NSGA-II is explained.
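Putting these pieces together, a driver program typically creates the problem and the algorithm, sets
the input parameters, adds the operators, and invokes execute(). The following is a minimal sketch
along the lines of the NSGAII_main class described in Section 3.3; the parameter values are illustrative,
and the operator objects are assumed to have been created as in Listing 3.3.

// Sketch of configuring and running an algorithm (values are illustrative)
Problem problem = new Kursawe("Real", 3);
Algorithm algorithm = new NSGAII(problem);

algorithm.setInputParameter("populationSize", 100);
algorithm.setInputParameter("maxEvaluations", 25000);

algorithm.addOperator("crossover", crossover);  // e.g., the SBX operator of Listing 3.3
algorithm.addOperator("mutation", mutation);    // e.g., a polynomial mutation operator
algorithm.addOperator("selection", selection);  // e.g., a binary tournament operator

SolutionSet result = algorithm.execute();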
Besides NSGA-II, jMetal includes the implementation of a number of both classic and modern
multi-objective optimizers; some examples are: SPEA2 [43], PAES [18], OMOPSO [33], MOCell [26],
AbYSS [30], MOEA/D [22], GDE3 [19], IBEA [46], or SMPSO [25].

3.2 jMetal Package Structure


jMetal is composed of eight packages, which are depicted in Figure 3.3: core, problems,
metaheuristics, operators, encodings, qualityIndicators, util, and experiments. We briefly
describe them next:

1  // Kursawe.java
2  ...
3  package jmetal.problems;
4  ...
5  public class Kursawe extends Problem {
6    /**
7     * Constructor.
8     */
9    public Kursawe(String solutionType, Integer numberOfVariables) throws ClassNotFoundException {
10     numberOfVariables_  = numberOfVariables.intValue();
11     numberOfObjectives_ = 2;
12     numberOfConstraints_ = 0;
13     problemName_ = "Kursawe";
14
15     upperLimit_ = new double[numberOfVariables_];
16     lowerLimit_ = new double[numberOfVariables_];
17
18     for (int i = 0; i < numberOfVariables_; i++) {
19       lowerLimit_[i] = -5.0;
20       upperLimit_[i] = 5.0;
21     } // for
22
23     if (solutionType.compareTo("BinaryReal") == 0) {
24       solutionType_ = new BinaryRealSolutionType(this);
25     } else if (solutionType.compareTo("Real") == 0) {
26       solutionType_ = new RealSolutionType(this);
27     }
28   } // Kursawe
29
30   /**
31    * Evaluates a solution
32    */
33   public void evaluate(Solution solution) throws JMException {
34     double[] fx = new double[2]; // function values
35
36     fx[0] = // f1 value
37     ...
38     fx[1] = // f2 value
39     ...
40
41     solution.setObjective(0, fx[0]);
42     solution.setObjective(1, fx[1]);
43   } // evaluate
44 } // Kursawe

Listing 3.6: Code of the class implementing problem Kursawe.



• Package core: This package contains the basic ingredients to be used by the metaheuristics devel-
oped under jMetal. The main classes in this package have been commented in Section 3.1, which
are: Algorithm, Operator, Problem, Variable, Solution, SolutionSet, and SolutionType.
This package was named jmetal.base in versions prior to jMetal 4.0.

• Package problems: All the problems available in jMetal are included in this package. Here we can
find well-known benchmarks (ZDT, DTLZ, and WFG) plus other more recent problem families
(LZ07, CEC2009Competition). Furthermore, we can find many other problems (Fonseca, Kursawe,
Schaffer, OKA2, etc.).

• Package metaheuristics: This package contains the metaheuristics implemented in jMetal. The
list of techniques include NSGA-II, SPEA2, PAES, PESA-II, GDE3, FastPGA, MOCell, AbYSS,
OMOPSO, IBEA, and MOEA/D. Although jMetal is aimed at multi-objective optimization, a
number of single objective algorithms are included in the jmetal.metaheuristics.singleObjective
package.

• Package jmetal.operators: This package contains different kinds of operator objects, including
crossover, mutation, selection, and local search operators. We give next an example of an operator
of each type:

– jmetal.operators.crossover.SBXCrossover: This operator takes two solutions S1 and S2
and performs a simulated binary (SBX) crossover, returning as a result the two obtained
offspring solutions.
– jmetal.operators.mutation.PolynomialMutation: Mutation operators are typically applied to
single solutions, modifying them accordingly, and they return the mutated solution. In this case,
the operator is a polynomial mutation.
– jmetal.operators.selection.BinaryTournament: Selection operators usually take a solution
set as a parameter and return a solution according to some criterion. In particular, this
operator applies a binary tournament.
– jmetal.operators.localSearch.MutationLocalSearch: These operators are intended to
apply local search strategies to a given solution. The MutationLocalSearch operator, used in the
AbYSS algorithm [30], requires as parameters a solution, a mutation operator, an integer
N, and a jmetal.base.archive object; the mutation operator is then applied iteratively
to improve the solution during N rounds, and the archive is used to store the non-dominated
solutions found.

• Package jmetal.encodings: This package contains the basic variable representations and the
solution types included in the framework. In jMetal version 4.0 the following classes are included:

– Variables: Binary, Real, BinaryReal (binary-coded real), Int, Permutation, ArrayInt,
and ArrayReal.
– Solution types: BinarySolutionType, RealSolutionType, BinaryRealSolutionType,
IntSolutionType, PermutationSolutionType, ArrayRealSolutionType, ArrayIntSolutionType,
IntRealSolutionType (combines Int and Real variables), and ArrayRealAndBinarySolutionType
(combines ArrayReal and Binary variables).

• Package util: A number of utility classes are included in this package, such as a pseudorandom number
generator, different types of archives, a neighborhood class to be used in cellular evolutionary
algorithms, comparators, etc.

• Package qualityIndicator: To assess the performance of multi-objective metaheuristics, a number
of quality indicators can be applied. The package currently contains six indicators (a usage sketch
is given after this list):

– Generational distance [37]


Figure 3.4: UML diagram of NSGAII. The NSGAII class extends Algorithm (which provides execute(),
addOperator()/getOperator(), setInputParameter()/getInputParameter(), setOutputParameter()/
getOutputParameter(), and getProblem()), holds a reference to the Problem it solves, and implements
execute(), which returns a SolutionSet.

– Inverted generational distance [37]


– Additive epsilon [45]
– Spread [6]
– Generalized spread [41]
– Hypervolume [44]

• Package experiments: This package contains a set of classes intended to carry out typical studies
in multi-objective optimization. It is described in Chapter 4.
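As a usage sketch of the qualityIndicator package, the QualityIndicator helper class (the object passed
to NSGA-II through the optional "indicators" input parameter, see Section 3.3) can be used to compute
indicator values for a solution set. The constructor arguments, the getHypervolume() method, and the
Pareto front file path below are illustrative and should be checked against the distribution.

// Sketch: computing a quality indicator value for an approximation set
Problem problem = new Kursawe("Real", 3);
QualityIndicator indicators =
    new QualityIndicator(problem, "paretoFronts/Kursawe.pf");  // file containing the reference front

SolutionSet front = algorithm.execute();  // approximation set returned by some metaheuristic

double hv = indicators.getHypervolume(front);
// similar getters exist for the other indicators listed above (check the class for the exact names)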

3.3 Case Study: NSGA-II


In this section, we describe the implementation of NSGA-II in jMetal. Under jMetal, a metaheuristic
is composed of a class defining the algorithm itself and another class to execute it. This second class
is used to specify the problem to solve, the operators to apply, the parameters of the algorithm, and
whatever other parameters need to be set (since jMetal 2.0, we have introduced an alternative way, by
using the package jmetal.experiments, as explained in Chapter 4). Let us call these two classes NSGAII
and NSGAII_main, respectively.

3.3.1 Class NSGAII.java


The UML diagram of the NSGAII class is depicted in Figure 3.4. As every metaheuristic developed
in jMetal, NSGAII inherits from Algorithm. This class has an abstract method, execute(), that is
called to run the algorithm and returns a SolutionSet as a result (typically, a population or archive
containing the obtained approximation set). We can see that new operators can be added to an algorithm
with the method addOperator(); these operators are accessed in the algorithm by invoking
getOperator(). Similarly, we can pass parameters to an algorithm (methods setInputParameter()
and getInputParameter()), and an algorithm can return output results via setOutputParameter() and
getOutputParameter().

// NSGAII.java

package jmetal.metaheuristics.nsgaII;

import jmetal.core.*;

/**
 * This class implements the NSGA-II algorithm.
 */
public class NSGAII extends Algorithm {

  /**
   * Constructor
   * @param problem Problem to solve
   */
  public NSGAII(Problem problem) {
    super(problem);
  } // NSGAII

  /**
   * Runs the NSGA-II algorithm.
   */
  public SolutionSet execute() {
  } // execute
} // NSGAII

Listing 3.7: Scheme of the implementation of class NSGAII.

NSGAII has a constructor which receives the problem to solve as a parameter, as well as the
implementation of execute(). Next, we analyze the implementation of the NSGAII class in jMetal
(file jmetal/metaheuristics/nsgaII/NSGAII.java). The basic code structure of the class is presented
in Listing 3.7.
Let us focus on the execute() method (see Listing 3.8). First, we describe the objects needed
to implement the algorithm. The variables storing the population size and the maximum number
of evaluations are declared in lines 2-3. The next variable, evaluations, is a counter of the number
of computed evaluations. The objects declared in lines 6-7 are needed to illustrate the use of quality
indicators inside the algorithms; we will explain their use later. Lines 10-12 contain the declaration of
the populations needed to implement NSGA-II: the current population, an offspring population, and an
auxiliary population used to join the other two. Next, we find the three genetic operators (lines 14-16)
and a Distance object (from package jmetal.util), which will be used to calculate the crowding
distance.
Once we have declared all the needed objects, we proceed to initialize them (Listing 3.9). The
parameters populationSize and maxEvaluations are input parameters whose values are obtained in
lines 23-24; the same applies to indicators, although this parameter is optional (the other two are
required). The population and the counter of evaluations are initialized next (lines 28-29), and finally
the mutation, crossover, and selection operators are obtained (lines 34-36).
The initial population is filled in the loop included in Listing 3.10. We can observe how new
solutions are created, evaluated, and inserted into the population.
The main loop of the algorithm is included in the piece of code contained in Listing 3.11. We can
observe the inner loop performing the generations (lines 55-71), where the genetic operators are applied.
The number of iterations of this loop is populationSize/2 because it is assumed that the crossover
returns two solutions; in the case of using a crossover operator returning only one solution, the sentence
in line 55 should be modified accordingly.
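As an illustration (this is a sketch of the kind of change required, not code distributed with jMetal), if we plugged in a hypothetical crossover operator returning a single Solution per application, the offspring-generation loop could be rewritten roughly as follows:

// Sketch: one offspring per crossover application, so the loop iterates populationSize times
for (int i = 0; i < populationSize; i++) {
  if (evaluations < maxEvaluations) {
    parents[0] = (Solution) selectionOperator.execute(population);
    parents[1] = (Solution) selectionOperator.execute(population);
    // Assumption: this hypothetical operator returns a single Solution instead of an array
    Solution offSpring = (Solution) crossoverOperator.execute(parents);
    mutationOperator.execute(offSpring);
    problem_.evaluate(offSpring);
    problem_.evaluateConstraints(offSpring);
    offspringPopulation.add(offSpring);
    evaluations++;
  } // if
} // for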
After the offspring population has been filled, the next step in NSGA-II is to apply ranking and
crowding to the union of the current and offspring populations to select the new individuals in the next
generation. The code is included in Listing 3.12, which basically follows the algorithm described in [6].

1    public SolutionSet execute() throws JMException, ClassNotFoundException {
2      int populationSize;
3      int maxEvaluations;
4      int evaluations;
5
6      QualityIndicator indicators; // QualityIndicator object
7      int requiredEvaluations;     // Use in the example of use of the
8                                   // indicators object (see below)
9
10     SolutionSet population;
11     SolutionSet offspringPopulation;
12     SolutionSet union;
13
14     Operator mutationOperator;
15     Operator crossoverOperator;
16     Operator selectionOperator;
17
18     Distance distance = new Distance();
19     ...

Listing 3.8: execute() method: declaring objects.

20     ...
21
22     // Read the parameters
23     populationSize = ((Integer) getInputParameter("populationSize")).intValue();
24     maxEvaluations = ((Integer) getInputParameter("maxEvaluations")).intValue();
25     indicators = (QualityIndicator) getInputParameter("indicators");
26
27     // Initialize the variables
28     population = new SolutionSet(populationSize);
29     evaluations = 0;
30
31     requiredEvaluations = 0;
32
33     // Read the operators
34     mutationOperator = operators_.get("mutation");
35     crossoverOperator = operators_.get("crossover");
36     selectionOperator = operators_.get("selection");
37     ...

Listing 3.9: execute() method: initializing objects.

38     ...
39     // Create the initial solutionSet
40     Solution newSolution;
41     for (int i = 0; i < populationSize; i++) {
42       newSolution = new Solution(problem_);
43       problem_.evaluate(newSolution);
44       problem_.evaluateConstraints(newSolution);
45       evaluations++;
46       population.add(newSolution);
47     } // for
48     ...

Listing 3.10: execute() method: initializing the population.



49     // Generations
50     while (evaluations < maxEvaluations) {
51
52       // Create the offSpring solutionSet
53       offspringPopulation = new SolutionSet(populationSize);
54       Solution[] parents = new Solution[2];
55       for (int i = 0; i < (populationSize / 2); i++) {
56         if (evaluations < maxEvaluations) {
57           // obtain parents
58           parents[0] = (Solution) selectionOperator.execute(population);
59           parents[1] = (Solution) selectionOperator.execute(population);
60           Solution[] offSpring = (Solution[]) crossoverOperator.execute(parents);
61           mutationOperator.execute(offSpring[0]);
62           mutationOperator.execute(offSpring[1]);
63           problem_.evaluate(offSpring[0]);
64           problem_.evaluateConstraints(offSpring[0]);
65           problem_.evaluate(offSpring[1]);
66           problem_.evaluateConstraints(offSpring[1]);
67           offspringPopulation.add(offSpring[0]);
68           offspringPopulation.add(offSpring[1]);
69           evaluations += 2;
70         } // if
71       } // for

Listing 3.11: execute() method: main loop.

The piece of code in Listing 3.13 illustrates the use of quality indicators inside a metaheuristic.
In particular, it shows the code we used in [29] to study the convergence speed of multi-objective
metaheuristics. As we commented before, if the indicators object was specified as an input parameter
(otherwise, it would be null - line 120), we apply it to test whether the hypervolume of the new population,
at the end of each generation, is equal to or greater than 98% of the hypervolume of the true Pareto
front (see [29] for further details). In case of success, the variable requiredEvaluations is assigned
the current number of function evaluations (line 124). Once this variable is not zero, we do not need to
carry out the test any more; that is the reason for including the condition in line 121.
The last sentences of the execute() method are included in Listing 3.14. In line 129 we can observe
that the variable requiredEvaluations is returned as an output parameter. Finally, we apply ranking to
the resulting population to return only non-dominated solutions (lines 132-133).

3.3.2 Class NSGAII_main


In this section we describe the NSGAII_main.java program, used to execute NSGA-II. The file is located in
jmetal/metaheuristics/nsgaII, as indicated in line 22 of the piece of code included in Listing 3.15,
which contains the import section of the program. The logging classes (lines 39-40) are needed to use a
logger object, which allows us to log error messages.
The code in Listing 3.16 contains the declaration of the main() method. In the implementation we
provide, there are three ways of invoking the program:

• jmetal.metaheuristics.nsgaII.NSGAII_main: the program is invoked without arguments. In
this case, a default problem is solved.

• jmetal.metaheuristics.nsgaII.NSGAII_main problemName: this is the choice to indicate the
problem to solve. The problem name must match one of those in the package jmetal.problems (e.g.,
Kursawe, ZDT4, DTLZ5, WFG1, etc.).

• jmetal.metaheuristics.nsgaII.NSGAII_main problemName paretoFrontFile: if we provide a
file containing the Pareto front of the problem to solve, a QualityIndicator object will be created,

72
73       // Create the solutionSet union of solutionSet and offSpring
74       union = ((SolutionSet) population).union(offspringPopulation);
75
76       // Ranking the union
77       Ranking ranking = new Ranking(union);
78
79       int remain = populationSize;
80       int index = 0;
81       SolutionSet front = null;
82       population.clear();
83
84       // Obtain the next front
85       front = ranking.getSubfront(index);
86
87       while ((remain > 0) && (remain >= front.size())) {
88         // Assign crowding distance to individuals
89         distance.crowdingDistanceAssignment(front, problem_.getNumberOfObjectives());
90         // Add the individuals of this front
91         for (int k = 0; k < front.size(); k++) {
92           population.add(front.get(k));
93         } // for
94
95         // Decrement remain
96         remain = remain - front.size();
97
98         // Obtain the next front
99         index++;
100        if (remain > 0) {
101          front = ranking.getSubfront(index);
102        } // if
103      } // while
104
105      // Remain is less than front(index).size, insert only the best one
106      if (remain > 0) { // front contains individuals to insert
107        distance.crowdingDistanceAssignment(front, problem_.getNumberOfObjectives());
108        front.sort(new CrowdingComparator());
109        for (int k = 0; k < remain; k++) {
110          population.add(front.get(k));
111        } // for
112
113        remain = 0;
114      } // if

Listing 3.12: execute() method: ranking and crowding.

115      ...
116      // This piece of code shows how to use the indicator object into the code
117      // of NSGA-II. In particular, it finds the number of evaluations required
118      // by the algorithm to obtain a Pareto front with a hypervolume higher
119      // than the hypervolume of the true Pareto front.
120      if ((indicators != null) &&
121          (requiredEvaluations == 0)) {
122        double HV = indicators.getHypervolume(population);
123        if (HV >= (0.98 * indicators.getTrueParetoFrontHypervolume())) {
124          requiredEvaluations = evaluations;
125        } // if
126      } // if

Listing 3.13: execute() method: using the hypervolume quality indicator.

127      ...
128      // Return as output parameter the required evaluations
129      setOutputParameter("evaluations", requiredEvaluations);
130
131      // Return the first non-dominated front
132      Ranking ranking = new Ranking(population);
133      return ranking.getSubfront(0);
134    } // execute

Listing 3.14: execute() method: end of the method.

1  // NSGAII_main.java
2  //
3  // Author:
4  //   Antonio J. Nebro <antonio@lcc.uma.es>
5  //   Juan J. Durillo <durillo@lcc.uma.es>
6  //
7  // Copyright (c) 2011 Antonio J. Nebro, Juan J. Durillo
8  //
9  // This program is free software: you can redistribute it and/or modify
10 // it under the terms of the GNU Lesser General Public License as published by
11 // the Free Software Foundation, either version 3 of the License, or
12 // (at your option) any later version.
13 //
14 // This program is distributed in the hope that it will be useful,
15 // but WITHOUT ANY WARRANTY; without even the implied warranty of
16 // MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
17 // GNU Lesser General Public License for more details.
18 //
19 // You should have received a copy of the GNU Lesser General Public License
20 // along with this program. If not, see <http://www.gnu.org/licenses/>.
21
22 package jmetal.metaheuristics.nsgaII;
23
24 import jmetal.core.*;
25 import jmetal.operators.crossover.*;
26 import jmetal.operators.mutation.*;
27 import jmetal.operators.selection.*;
28 import jmetal.problems.*;
29 import jmetal.problems.DTLZ.*;
30 import jmetal.problems.ZDT.*;
31 import jmetal.problems.WFG.*;
32 import jmetal.problems.LZ09.*;
33
34 import jmetal.util.Configuration;
35 import jmetal.util.JMException;
36 import java.io.IOException;
37 import java.util.*;
38
39 import java.util.logging.FileHandler;
40 import java.util.logging.Logger;
41
42 import jmetal.qualityIndicator.QualityIndicator;
Listing 3.15: NSGAII main: importing packages.



44 public class NSGAII_main {
45   public static Logger logger_;           // Logger object
46   public static FileHandler fileHandler_; // FileHandler object
47
48   /**
49    * @param args Command line arguments.
50    * @throws JMException
51    * @throws IOException
52    * @throws SecurityException
53    * Usage: three options
54    *    - jmetal.metaheuristics.nsgaII.NSGAII_main
55    *    - jmetal.metaheuristics.nsgaII.NSGAII_main problemName
56    *    - jmetal.metaheuristics.nsgaII.NSGAII_main problemName paretoFrontFile
57    */
58   public static void main(String[] args) throws

Listing 3.16: NSGAII main: main method.

and the program will calculate a number of quality indicator values at the end of the execution of
the algorithm. This option is also a requirement to use quality indicators inside the algorithms (an
example invocation is shown next).
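For instance, assuming that the CLASSPATH is properly set and that a file ZDT1.pf with the true Pareto front of ZDT1 (taken, e.g., from the jMetal Web site) is available in the working directory, the third form could be invoked as follows (the problem and file names here are just examples):

% java jmetal.metaheuristics.nsgaII.NSGAII_main ZDT1 ZDT1.pf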
Listing 3.17 contains the code used to declare the objects required to execute the algorithm (lines
59-63). The logger object is initialized in lines 69-72, and the log messages will be written in a file
named "NSGAII_main.log". The sentences included between lines 74 and 92 process the arguments of
the main() method. The default problem to solve is indicated in line 85. The key point here is that,
at the end of this block of sentences, an instance of the Problem class must be obtained. This is the only
argument needed to create an instance of the algorithm, as we can see in line 94. The next line contains
the sentence that should be used if we intend to use the steady-state version of NSGA-II, which is also
included in jMetal.
Once an object representing the algorithm to run has been created, it must be configured. In the
code included in Listing 3.18, the input parameters are set in lines 97-98, the crossover and mutation
operators are specified in lines 101-109, and the selection operator is chosen in line 113. Once the
operators have been specified, they are added to the algorithm object in lines 116-118. The sentence in
line 121 sets the indicators object as an input parameter.
When the algorithm has been configured, it is executed by invoking its execute() method (line 125
in Listing 3.19). When it has finished, the running time is reported, and the obtained solutions and
their objective values are stored in two files (lines 131 and 133). Finally, if the indicators object is not
null, a number of quality indicators are calculated (lines 136-141) and printed, as well as the number of
evaluations returned by the algorithm as an output parameter.

59   Problem problem;      // The problem to solve
60   Algorithm algorithm;  // The algorithm to use
61   Operator crossover;   // Crossover operator
62   Operator mutation;    // Mutation operator
63   Operator selection;   // Selection operator
64
65   HashMap parameters;   // Operator parameters
66
67   QualityIndicator indicators; // Object to get quality indicators
68
69   // Logger object and file to store log messages
70   logger_ = Configuration.logger_;
71   fileHandler_ = new FileHandler("NSGAII_main.log");
72   logger_.addHandler(fileHandler_);
73
74   indicators = null;
75   if (args.length == 1) {
76     Object[] params = {"Real"};
77     problem = (new ProblemFactory()).getProblem(args[0], params);
78   } // if
79   else if (args.length == 2) {
80     Object[] params = {"Real"};
81     problem = (new ProblemFactory()).getProblem(args[0], params);
82     indicators = new QualityIndicator(problem, args[1]);
83   } // if
84   else { // Default problem
85     problem = new Kursawe("Real", 3);
86     // problem = new Kursawe("BinaryReal", 3);
87     // problem = new Water("Real");
88     // problem = new ZDT1("ArrayReal", 100);
89     // problem = new ConstrEx("Real");
90     // problem = new DTLZ1("Real");
91     // problem = new OKA2("Real");
92   } // else
93
94   algorithm = new NSGAII(problem);
95   // algorithm = new ssNSGAII(problem);

Listing 3.17: NSGAII main: declaring objects, processing the arguments of main(), and creating the
algorithm.

96   // Algorithm parameters
97   algorithm.setInputParameter("populationSize", 100);
98   algorithm.setInputParameter("maxEvaluations", 25000);
99
100  // Mutation and Crossover for Real codification
101  parameters = new HashMap();
102  parameters.put("probability", 0.9);
103  parameters.put("distributionIndex", 20.0);
104  crossover = CrossoverFactory.getCrossoverOperator("SBXCrossover", parameters);
105
106  parameters = new HashMap();
107  parameters.put("probability", 1.0 / problem.getNumberOfVariables());
108  parameters.put("distributionIndex", 20.0);
109  mutation = MutationFactory.getMutationOperator("PolynomialMutation", parameters);
110
111  // Selection Operator
112  parameters = null;
113  selection = SelectionFactory.getSelectionOperator("BinaryTournament2", parameters);
114
115  // Add the operators to the algorithm
116  algorithm.addOperator("crossover", crossover);
117  algorithm.addOperator("mutation", mutation);
118  algorithm.addOperator("selection", selection);
119
120  // Add the indicator object to the algorithm
121  algorithm.setInputParameter("indicators", indicators);

Listing 3.18: NSGAII main: configuring the algorithm

122  ...
123  // Execute the Algorithm
124  long initTime = System.currentTimeMillis();
125  SolutionSet population = algorithm.execute();
126  long estimatedTime = System.currentTimeMillis() - initTime;
127
128  // Result messages
129  logger_.info("Total execution time: " + estimatedTime + "ms");
130  logger_.info("Variables values have been writen to file VAR");
131  population.printVariablesToFile("VAR");
132  logger_.info("Objectives values have been writen to file FUN");
133  population.printObjectivesToFile("FUN");
134
135  if (indicators != null) {
136    logger_.info("Quality indicators");
137    logger_.info("Hypervolume: " + indicators.getHypervolume(population));
138    logger_.info("GD         : " + indicators.getGD(population));
139    logger_.info("IGD        : " + indicators.getIGD(population));
140    logger_.info("Spread     : " + indicators.getSpread(population));
141    logger_.info("Epsilon    : " + indicators.getEpsilon(population));
142
143    int evaluations = ((Integer) algorithm.getOutputParameter("evaluations")).intValue();
144    logger_.info("Speed      : " + evaluations + " evaluations");
145  } // if
146 } // main
147 } // NSGAII_main

Listing 3.19: NSGAII main: running the algorithms and reporting results
Chapter 4

Experimentation with jMetal

In our research work, when we want to assess the performance of a multi-objective metaheuristic, we
usually compare it with other algorithms over a set of benchmark problems. After choosing the test
suites and the quality indicators to apply, we carry out a number of independent runs of each experiment
and, after that, we analyze the results.
Typically, we follow these steps:

1. Configure the algorithms by setting the parameter values in an associated Settings object (see
Subsection 4.1).
2. Optionally, configure the problems to solve. For example, the DTLZ problems are configured by
default with three objectives, while the WFG ones are bi-objective. If we want to modify these
default settings, we have to do it by changing them in the files defining the problems.
3. Execute a number of independent runs for each pair (algorithm, problem).
4. Analyze the results. jMetal can generate Latex tables and R scripts to present the results and to
provide statistical information.

To carry out these steps, we use the jmetal.experiments package, first available in jMetal 2.0, and
this chapter is devoted mainly to explaining the use of that package. First, we describe the structure
of the jmetal.experiments.Settings class and how it can be used to configure NSGA-II; then, we
analyze the jmetal.experiments.Main class. Finally, we illustrate with two examples the use of the
jmetal.experiments.Experiment class.

4.1 The jmetal.experiments.Settings Class


The motivation for designing this class has to do with the fact that, in the traditional approach, a jMetal
metaheuristic is executed through a main class, such as NSGAII_main in the case of NSGA-II (see Section 3.3).
This class contains the configuration of the algorithm, so if we want to run the metaheuristic with different
parameter settings, we have to modify that file each time. This may become cumbersome and, as a
consequence, when main classes are used the configurations of the algorithms cannot be reused in an
easy way.
To face this issue, we took the alternative approach of defining the configuration of a metaheuristic
in a class which contains the default settings and allows them to be modified. Listing 4.1 contains
the code of the jmetal.experiments.Settings class. The main features of this class are:
• Its state is represented by a Problem object (line 9), the problem name (line 10), and a string to
store the file containing the true Pareto front of the problem if quality indicators are to be applied
(line 11).


1  // Settings.java
2  ...
3  package jmetal.experiments;
4  ...
5  /**
6   * Class representing Settings objects.
7   */
8  public abstract class Settings {
9    protected Problem problem_;
10   protected String problemName_;
11   public String paretoFrontFile_;
12
13   /**
14    * Constructor
15    */
16   public Settings() {
17   } // Constructor
18
19   /**
20    * Constructor
21    */
22   public Settings(String problemName) {
23     problemName_ = problemName;
24   } // Constructor
25
26   /**
27    * Default configure method
28    * @return A problem with the default configuration
29    * @throws jmetal.util.JMException
30    */
31   abstract public Algorithm configure() throws JMException;
32
33   /**
34    * Configure method. Change the default configuration
35    */
36   public final Algorithm configure(HashMap settings) throws JMException {
37     ...
38   } // configure
39
40   /**
41    * Changes the problem to solve
42    * @param problem
43    */
44   void setProblem(Problem problem) {
45     problem_ = problem;
46   } // setProblem
47
48   /**
49    * Returns the problem
50    */
51   Problem getProblem() {
52     return problem_;
53   }
54 } // Settings

Listing 4.1: Settings class



• The problem can be set either when creating the object (lines 22-24) or by using the method
setProblem() (lines 44-46).
• The default settings are established in the configure() method (line 31). This method must be
defined in the corresponding subclasses of Settings.
• The values of the parameters can be modified by using a Java HashMap object, passing it as an
argument to the second definition of the configure() method (line 36).

4.2 An example of Setting class: NSGA-II


To illustrate the use of the Settings class, we analyze the NSGAII_Settings class, which is in the package
jmetal.experiments.settings. The idea is simple: to move the parameter settings of NSGAII_main
(see Section 3.3.2) to NSGAII_Settings. This is depicted in Listing 4.2, where the parameters to be set
are declared in lines 9-14. The class constructor (lines 20-37), taking as argument the problem to be
solved, creates an instance of the problem (lines 23-29) and assigns the default parameter values (lines
30-36). We impose the requirement that the parameters have to be public and their names must end
with the underscore ('_') character; the reason has to do with the mechanism for modifying the settings, as
explained below.
1  // NSGAII_Settings.java
2  ...
3  package jmetal.experiments.settings;
4  ...
5  /**
6   * Settings class of algorithm NSGA-II (real encoding)
7   */
8  public class NSGAII_Settings extends Settings {
9    public int    populationSize_;
10   public int    maxEvaluations_;
11   public double mutationProbability_;
12   public double crossoverProbability_;
13   public double mutationDistributionIndex_;
14   public double crossoverDistributionIndex_;
15
16   /**
17    * Constructor
18    * @throws JMException
19    */
20   public NSGAII_Settings(String problem) throws JMException {
21     super(problem);
22
23     Object[] problemParams = {"Real"};
24     try {
25       problem_ = (new ProblemFactory()).getProblem(problemName_, problemParams);
26     } catch (JMException e) {
27       // TODO Auto-generated catch block
28       e.printStackTrace();
29     }
30     // Default settings
31     populationSize_             = 100;
32     maxEvaluations_             = 25000;
33     mutationProbability_        = 1.0 / problem_.getNumberOfVariables();
34     crossoverProbability_       = 0.9;
35     mutationDistributionIndex_  = 20.0;
36     crossoverDistributionIndex_ = 20.0;
37   } // NSGAII_Settings
38   ...

Listing 4.2: jmetal.experiments.settings.NSGAII Settings: Default settings and constructor.

39   ...
40   /**
41    * Configure NSGAII with user-defined parameter settings
42    * @return A NSGAII algorithm object
43    * @throws jmetal.util.JMException
44    */
45   public Algorithm configure() throws JMException {
46     Algorithm algorithm;
47     Selection selection;
48     Crossover crossover;
49     Mutation  mutation;
50
51     HashMap parameters; // Operator parameters
52
53     // Creating the algorithm. There are two choices: NSGAII and its steady-
54     // state variant ssNSGAII
55     algorithm = new NSGAII(problem_);
56     // algorithm = new ssNSGAII(problem_);
57
58     // Algorithm parameters
59     algorithm.setInputParameter("populationSize", populationSize_);
60     algorithm.setInputParameter("maxEvaluations", maxEvaluations_);
61
62     // Mutation and Crossover for Real codification
63     parameters = new HashMap();
64     parameters.put("probability", crossoverProbability_);
65     parameters.put("distributionIndex", crossoverDistributionIndex_);
66     crossover = CrossoverFactory.getCrossoverOperator("SBXCrossover", parameters);
67
68     parameters = new HashMap();
69     parameters.put("probability", mutationProbability_);
70     parameters.put("distributionIndex", mutationDistributionIndex_);
71     mutation = MutationFactory.getMutationOperator("PolynomialMutation", parameters);
72
73     // Selection Operator
74     parameters = null;
75     selection = SelectionFactory.getSelectionOperator("BinaryTournament2", parameters);
76
77     // Add the operators to the algorithm
78     algorithm.addOperator("crossover", crossover);
79     algorithm.addOperator("mutation", mutation);
80     algorithm.addOperator("selection", selection);
81     return algorithm;
82   } // configure
83 } // NSGAII_Settings

Listing 4.3: jmetal.experiments.settings.NSGAII Settings: Configuring the algorithm.

The implementation of the configure() method is included in Listing 4.3, where we can observe
that it contains basically the same code used in the NSGAII_main class to configure the algorithm.
To modify specific parameters, we make use of a Java HashMap object. The map is composed of pairs
(key, value), where the key is the name of the parameter to be changed. The idea is that the state variables
defined in the subclass of Settings are used as keys in the properties object. As commented before, those
variables must be public, and their identifiers must end with the underscore ('_') character.
Let us illustrate this with some pieces of code:

• Creating an instance of NSGA-II with the default parameter settings by using class NSGAII_Settings:
1 Algorithm nsgaII = new NSGAII_Settings(problem).configure();

• Let us modify the crossover probability, which is stored in the field crossoverProbability_ (Listing 4.2,
line 34), setting it to 1.0 (the default value is 0.9):
1 HashMap parameters = new HashMap();
2 parameters.put("crossoverProbability_", 1.0);
3 Algorithm nsgaII = new NSGAII_Settings(problem).configure(parameters);

• The algorithm can be executed now:

1 SolutionSet resultPopulation = nsgaII.execute();

An example of the use of this feature can be found in Subsection 4.4.1.


In jMetal 4.5, the jmetal.experiments.settings package contains settings classes for a number
of metaheuristics, including: AbYSS, CellDE, cMOEAD, GDE3, IBEA, MOCell, MOCHC, MOEAD,
MOEAD_DRA, NSGAII, OMOPSO, PAES, RandomSearch, SMPSO, SMPSOhv, SMSEMOA, and SPEA2.
Furthermore, we include the classes for pNSGAII and pSMPSO (parallel versions of NSGAII and
SMPSO, respectively), and NSGAIIBinary and NSGAIIPermutation (configurations of NSGAII to work
with binary and permutation encodings).
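As an illustration of this naming convention, any of these classes can be used in the same way as NSGAII_Settings. The following sketch configures SPEA2 for the ZDT1 problem; the parameter name populationSize_ is assumed to follow the public-field convention described above and should be checked against the actual SPEA2_Settings class:

// Sketch: configuring SPEA2 through its Settings class (analogous to NSGAII_Settings)
HashMap parameters = new HashMap();
parameters.put("populationSize_", 100); // assumed field of SPEA2_Settings
Algorithm spea2 = new SPEA2_Settings("ZDT1").configure(parameters);
SolutionSet result = spea2.execute();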

4.3 The jmetal.experiments.Main class


The use of Settings objects in jMetal allows us to have a single program to run the algorithms. This
program is defined in the class jmetal.experiments.Main. If we take a look at this class (see Listing 4.4),
we can see the three ways to run the program (lines 15-17), where the only required argument is the algorithm name.
This name must be the prefix of the corresponding settings class (e.g., NSGAII, GDE3, etc.). The second
argument is the problem name (e.g., ZDT1, DTLZ3, etc.) and the third one is the name of the file
containing the Pareto front of the problem. If the three arguments are indicated, the program
calculates and displays the values of a number of quality indicators (lines 45-58), which are applied to
the obtained front.
An example of use is the following:

% java jmetal.experiments.Main NSGAII ZDT1


05-dic-2008 15:22:34 jmetal.experiments.Main main
INFO: Total execution time: 3965ms
05-dic-2008 15:22:34 jmetal.experiments.Main main
INFO: Objectives values have been writen to file FUN
05-dic-2008 15:22:34 jmetal.experiments.Main main
INFO: Variables values have been writen to file VAR
05-dic-2008 15:22:34 jmetal.experiments.Main main
INFO: Quality indicators
05-dic-2008 15:22:34 jmetal.experiments.Main main
INFO: Hypervolume: 0.6590761194336173
05-dic-2008 15:22:34 jmetal.experiments.Main main
INFO: GD : 2.828645886294944E-4
05-dic-2008 15:22:34 jmetal.experiments.Main main
INFO: IGD : 2.1542653967708837E-4
05-dic-2008 15:22:34 jmetal.experiments.Main main
INFO: Spread : 0.4153061260894926
05-dic-2008 15:22:34 jmetal.experiments.Main main
INFO: Epsilon : 0.018577848537497554

1  // Main.java
2  ...
3  package jmetal.experiments;
4  ...
5  /**
6   * Class for running algorithms
7   */
8  public class Main {
9    /**
10    * @param args Command line arguments.
11    * @throws JMException
12    * @throws IOException
13    * @throws SecurityException
14    * Usage: three options
15    *    - jmetal.experiments.Main algorithmName
16    *    - jmetal.experiments.Main algorithmName problemName
17    *    - jmetal.experiments.Main algorithmName problemName paretoFrontFile
18    * @throws ClassNotFoundException
19    */
20   public static void main(String[] args) throws
21                      JMException, ... {
22     Problem problem;      // The problem to solve
23     Algorithm algorithm;  // The algorithm to use
24
25     QualityIndicator indicators; // Object to get quality indicators
26
27     Settings settings = null;
28
29     String algorithmName   = "";
30     String problemName     = "Kursawe"; // Default problem
31     String paretoFrontFile = "";
32     ...
33     // Execute the Algorithm
34     long initTime = System.currentTimeMillis();
35     SolutionSet population = algorithm.execute();
36     long estimatedTime = System.currentTimeMillis() - initTime;
37
38     // Result messages
39     logger_.info("Total execution time: " + estimatedTime + "ms");
40     logger_.info("Objectives values have been writen to file FUN");
41     population.printObjectivesToFile("FUN");
42     logger_.info("Variables values have been writen to file VAR");
43     population.printVariablesToFile("VAR");
44
45     if (indicators != null) {
46       logger_.info("Quality indicators");
47       logger_.info("Hypervolume: " + indicators.getHypervolume(population));
48       logger_.info("GD         : " + indicators.getGD(population));
49       logger_.info("IGD        : " + indicators.getIGD(population));
50       logger_.info("Spread     : " + indicators.getSpread(population));
51       logger_.info("Epsilon    : " + indicators.getEpsilon(population));
52
53       if (algorithm.getOutputParameter("evaluations") != null) {
54         Integer evals = (Integer) algorithm.getOutputParameter("evaluations");
55         int evaluations = (Integer) evals.intValue();
56         logger_.info("Speed      : " + evaluations + " evaluations");
57       } // if
58     } // if
59   } // Main
60 } // Main

Listing 4.4: jmetal.experiments.Main class



4.4 Experimentation Example: NSGAIIStudy


jMetal includes the jmetal.experiments.Experiment class, which is intended to help in carrying out
experimentation studies with algorithms. In its current state, it allows us to indicate: the metaheuristics to
run, the problems to solve, the quality indicators to apply, and the number of independent runs to carry
out. As a result, it generates a directory with all the obtained approximation sets and quality indicator
values and, depending on the user preferences:
• A latex file containing tables with means and medians of the obtained measures.
• R scripts to produce boxplots of the results.
• R scripts to generate latex tables with the application of the Wilcoxon statistical test to the results.
• Latex tables containing the values of the Friedman test.
In this section, we illustrate how to use this class by detailing the code of jmetal.experiments.studies.NSGAIIStudy,
a subclass of Experiment aimed at studying the effect of varying the crossover probability in NSGA-II.
Specifically, we want to study four probability values: 1.0, 0.9, 0.8, and 0.7. Let us recall that this is
only an example; we are not particularly interested in the results of this study.

4.4.1 Defining the experiment


We enumerate the steps to follow in order to define our own Experiment subclass:
1 // NSGAIIStudy.java
2 ...
3 package jmetal.experiments.studies;
4 ...
5 /**
6  * Class implementing an example of experiment using NSGA-II as base algorithm.
7  * The experiment consists in studying the effect of the crossover probability
8  * in NSGA-II.
9  */
10 public class NSGAIIStudy extends Experiment {
11 ...

1. A method called algorithmSettings must be implemented:


12 ...
13 /**
14  * Configures the algorithms in each independent run
15  * @param problem The problem to solve
16  * @param problemIndex
17  * @param algorithm Array containing the algorithms to run
18  * @throws ClassNotFoundException
19  */
20 public synchronized void algorithmSettings(String problemName,
21                                            int problemIndex,
22                                            Algorithm[] algorithm)
23                                            throws ClassNotFoundException {
24 ...

This method is invoked automatically in each independent run, for each problem and algorithm.
The key is that a Settings object with the desired parameterization has to be created in order to
get the Algorithm to be executed:
25 ...
26 try {
27   int numberOfAlgorithms = algorithmNameList_.length;
28
29   HashMap[] parameters = new HashMap[numberOfAlgorithms];
30
31   for (int i = 0; i < numberOfAlgorithms; i++) {
32     parameters[i] = new HashMap();
33   } // for
34
35   parameters[0].put("crossoverProbability_", 1.0);
36   parameters[1].put("crossoverProbability_", 0.9);
37   parameters[2].put("crossoverProbability_", 0.8);
38   parameters[3].put("crossoverProbability_", 0.7);
39
40   if ((!paretoFrontFile_[problemIndex].equals("")) ||
41       (paretoFrontFile_[problemIndex] == null)) {
42     for (int i = 0; i < numberOfAlgorithms; i++)
43       parameters[i].put("paretoFrontFile_", paretoFrontFile_[problemIndex]);
44   } // if
45
46   for (int i = 0; i < numberOfAlgorithms; i++)
47     algorithm[i] = new NSGAII_Settings(problemName).configure(parameters[i]);
48   ...
49 } // algorithmSettings
50 ...

In this example, as we are interested in four configurations of NSGA-II with four different crossover
probabilities, we define a Java HashMap object per algorithm (line 29) to indicate the desired values
(lines 35-38). The code between lines 40-44 is used to incorporate the names of the Pareto front
files if they are specified. Finally, the Algorithm objects are created and configured, and they are
ready to be executed (lines 46-47).

2. Once we have defined the algorithmSettings method, we have to do the same with the main
method. First, an object of the NSGAIIStudy class must be created:
51 ...
52 public static void main(String[] args) throws JMException, IOException {
53   NSGAIIStudy exp = new NSGAIIStudy();
54 ...

3. We need to give a name to the experiment (note: take into account that this name will be used
to generate Latex tables, so you should avoid using the underscore symbol '_'). In this case, we
choose the same name as the class, "NSGAIIStudy".
55 ...
56 exp.experimentName_ = "NSGAIIStudy";
57 ...

4. We have to indicate: the names of the algorithms, the problems to solve, the names of the files
containing the Pareto fronts, and a list of the quality indicators to apply:
58 ...
59 exp.algorithmNameList_ = new String[]{"NSGAIIa", "NSGAIIb", "NSGAIIc", "NSGAIId"};
60 exp.problemList_ = new String[]{"ZDT1", "ZDT2", "ZDT3", "ZDT4", "DTLZ1", "WFG2"};
61 exp.paretoFrontFile_ = new String[]{"ZDT1.pf", "ZDT2.pf", "ZDT3.pf", "ZDT4.pf",
                                       "DTLZ1.2D.pf", "WFG2.2D.pf"};
62 exp.indicatorList_ = new String[]{"HV", "SPREAD", "IGD", "EPSILON"};
63 ...

The algorithm names are merely tags that will be used to create the output directories and the
tables. The problem names must be the same as those used in jmetal.problems. We must note that:
• The order of the names of the Pareto front files must be the same as the order of the problems
in the problem list.
• If we use the names of the Pareto front files that can be found in the jMetal Web site, when
indicating a DTLZ problem (such as DTLZ1), we must indicate the 2D file (DTLZ1.2D.pf) if we
intend to solve it using a bi-objective formulation. Furthermore, we have to modify the
problem classes, such as DTLZ1.java, to indicate two objectives.
The same holds if we want to solve the WFG problems with more objectives: by default they are
defined as bi-objective, so they have to be modified accordingly (see the sketch below).
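As a side note, when a problem is created directly (outside the experiments framework), the number of objectives can usually be passed to the constructor instead of editing the class. The following line is only a sketch; the constructor signature (solution type, number of variables, number of objectives) is assumed and should be checked against the DTLZ1 class of your jMetal version:

// Assumed constructor: DTLZ1(String solutionType, Integer numberOfVariables, Integer numberOfObjectives)
Problem problem = new DTLZ1("Real", 7, 2); // bi-objective DTLZ1 with 7 decision variables

Within the Experiment class, however, problems are instantiated by name through ProblemFactory, so the defaults written in the problem classes are the ones that apply.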
5. The next step is to indicate the output directory and the directory where the Pareto front files are
located:
64 ...
65 exp.experimentBaseDirectory_ = "/Users/antonio/Softw/pruebas/jmetal/" +
66                                exp.experimentName_;
67 exp.paretoFrontDirectory_ = "/Users/antonio/Softw/pruebas/data/paretoFronts";
68 ...

6. Once everything is configured, the array containing the Settings of the algorithms must be
initialized:
69 ...
70 exp.algorithmSettings_ = new Settings[numberOfAlgorithms];
71 ...

7. The number of independent runs has to be specified (30 in this example):

72 ...
73 exp.independentRuns_ = 30;
74 ...

8. The experiment has to be initialized as follows:

75 ...
76 exp.initExperiment();
77 ...

9. Finally, we execute the algorithms. The runExperiment() method has an optional parameter (the
default value is 1) indicating the number of threads to be created to run the experiments (see
Section 4.8 for further details):
78 ...
79 // Run the experiments
80 int numberOfThreads;
81 exp.runExperiment(numberOfThreads = 4);
82 ...

10. Optionally, we may be interested in generating Latex tables and statistical information about the
obtained results. Latex tables are produced by the following command:
83 ...
84 // Generate latex tables
85 exp.generateLatexTables();
86 ...

If we are interested in getting boxplots, it is possible to obtain R scripts to generate them. In that
case, we need to invoke the generateRBoxplotScripts() method:

87 ...
88 // Configure the R scripts to be generated
89 int rows;
90 int columns;
91 String prefix;
92 String[] problems;
93
94 rows = 2;
95 columns = 3;
96 prefix = new String("Problems");
97 problems = new String[]{"ZDT1", "ZDT2", "ZDT3", "ZDT4", "DTLZ1", "WFG2"};
98
99 boolean notch;
100 exp.generateRBoxplotScripts(rows, columns, problems, prefix, notch = true, exp);
101 ...

This method generates R scripts which produce .eps files containing rows × columns boxplots of
the list of problems passed as third parameter. It is necessary to explicitly indicate the problems
to be considered in the boxplots because, if there are too many problems, the resulting graphics will
be very small and difficult to read. In this situation, several calls to generateRBoxplotScripts()
can be included. The names of the scripts will start with the prefix specified in the fourth parameter
plus the name of the quality indicator, ended with the suffix "Boxplot.R". The notch parameter
indicates whether notched boxplots should be generated or not.
Additionally, a method called generateRWilcoxonScripts() is available. This method is intended
to apply the Wilcoxon rank-sum test to the obtained results:
102 ...
103 exp.generateRWilcoxonScripts(problems, prefix, exp);
104 ...

For each indicator, a file with suffix ”Wilcox.R” will be generated. Once each of these scripts is
executed, a latex file will be yielded as output. Please, see next section for further details.
Since jMetal 4.4, the Friedman test can be applied to the results of each quality indicator, as
illustrated next:
105 ...
106 // Applying the Friedman test
107 Friedman test = new Friedman(exp);
108 test.executeTest("EPSILON");
109 test.executeTest("HV");
110 test.executeTest("SPREAD");

4.4.2 Running the experiments


To run the experiments, if we are using the command line we simply have to type (assuming the the
CLASSPATH variable has been configurated):
java jmetal.experiments.NSGAIIStudy
After the execution of the algorithms, we obtain the directory tree depicted in Figure 4.1. The
directories are:
• data: Output of the algorithms.
• latex: Latex file containing result tables.
• R: R scripts for generating statistical information.

Figure 4.1: Output directories and files after running the experiment.

4.4.3 Analyzing the output results


As can be observed in Figure 4.1-left, the directory named NSGAIIStudy has three directories: data,
R, and latex. The data directory contains (see Figure 4.1-right), for each algorithm, the files with
the variable values (files VAR.0, VAR.1, ...) and function values (files FUN.0, FUN.1, ...) of the obtained
approximation sets (we show four files instead of the 30 files), and the quality indicators of these solution
sets are included in the files HV, SPREAD, EPSILON, and IGD.
As the FUN.XX files store the fronts of solutions computed by the algorithms, they can be plotted to
observe the resulting approximation sets. Depending on the study you are interested in, you could also
join all of them into a single file to obtain a reference set (after removing the dominated solutions; a
sketch of this idea is given below).
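As a rough illustration of how such a reference front could be built, the following standalone program (not part of jMetal; the output file name and the assumption that all objectives are minimized are ours) merges several FUN files and keeps only the non-dominated points:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.util.*;

public class BuildReferenceFront {
  // true if point a dominates point b (all objectives assumed to be minimized)
  static boolean dominates(double[] a, double[] b) {
    boolean better = false;
    for (int i = 0; i < a.length; i++) {
      if (a[i] > b[i]) return false;
      if (a[i] < b[i]) better = true;
    }
    return better;
  }

  public static void main(String[] args) throws IOException {
    // Pass the FUN files as arguments, e.g.: java BuildReferenceFront FUN.0 FUN.1 FUN.2
    List<double[]> points = new ArrayList<double[]>();
    for (String fileName : args) {
      for (String line : Files.readAllLines(Paths.get(fileName), StandardCharsets.UTF_8)) {
        if (line.trim().isEmpty()) continue;
        String[] tokens = line.trim().split("\\s+");
        double[] point = new double[tokens.length];
        for (int i = 0; i < tokens.length; i++) point[i] = Double.parseDouble(tokens[i]);
        points.add(point);
      }
    }
    // Keep only the non-dominated points and write them to a reference front file
    List<String> front = new ArrayList<String>();
    for (double[] candidate : points) {
      boolean isDominated = false;
      for (double[] other : points)
        if (other != candidate && dominates(other, candidate)) { isDominated = true; break; }
      if (!isDominated) {
        StringBuilder row = new StringBuilder();
        for (double v : candidate) row.append(v).append(' ');
        front.add(row.toString().trim());
      }
    }
    Files.write(Paths.get("ReferenceFront.txt"), front, StandardCharsets.UTF_8);
  }
}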
The latex directory contains a Latex file with the name of the experiment, NSGAIIStudy.tex. You
just need to compile the file with your favorite Latex tool. For example, you could simply type:

latex NSGAIIStudy.tex
dvipdf NSGAIIStudy.dvi

to get a pdf file. Alternatively, you could invoke the pdflatex command:

pdflatex NSGAIIStudy.tex

As an example of the obtained output, Table 4.1 includes the mean and standard deviation of the
results after applying the additive epsilon indicator, and the median and interquartile range (IQR) values
are in Table 4.2.

Table 4.1: EPSILON. Mean and standard deviation


        NSGAIIa              NSGAIIb              NSGAIIc              NSGAIId
ZDT1    1.36e-02 (2.4e-03)   1.35e-02 (2.1e-03)   1.37e-02 (1.7e-03)   1.39e-02 (2.0e-03)
ZDT2    1.25e-02 (2.0e-03)   1.40e-02 (2.3e-03)   1.36e-02 (1.9e-03)   1.40e-02 (1.5e-03)
ZDT3    8.25e-03 (1.4e-03)   8.36e-03 (1.6e-03)   8.56e-03 (1.5e-03)   9.11e-03 (1.9e-03)
ZDT4    1.42e-02 (2.3e-03)   1.52e-02 (2.2e-03)   2.11e-02 (2.0e-02)   1.97e-02 (1.2e-02)
DTLZ1   7.35e-03 (1.3e-03)   1.58e-02 (4.6e-02)   8.76e-03 (1.9e-03)   1.16e-02 (1.3e-02)
WFG2    3.86e-01 (3.5e-01)   4.32e-01 (3.4e-01)   4.32e-01 (3.4e-01)   3.87e-01 (3.5e-01)

Table 4.2: EPSILON. Median and IQR


        NSGAIIa              NSGAIIb              NSGAIIc              NSGAIId
ZDT1    1.31e-02 (2.8e-03)   1.33e-02 (2.8e-03)   1.37e-02 (3.0e-03)   1.36e-02 (1.6e-03)
ZDT2    1.25e-02 (3.3e-03)   1.37e-02 (3.0e-03)   1.35e-02 (2.8e-03)   1.37e-02 (2.5e-03)
ZDT3    8.20e-03 (1.9e-03)   8.36e-03 (2.0e-03)   8.46e-03 (1.8e-03)   8.46e-03 (1.5e-03)
ZDT4    1.41e-02 (3.6e-03)   1.54e-02 (2.8e-03)   1.62e-02 (7.1e-03)   1.74e-02 (3.6e-03)
DTLZ1   6.89e-03 (2.2e-03)   7.03e-03 (1.8e-03)   8.45e-03 (2.2e-03)   9.52e-03 (3.0e-03)
WFG2    7.10e-01 (7.0e-01)   7.10e-01 (7.0e-01)   7.11e-01 (7.0e-01)   7.10e-01 (7.0e-01)

Table 4.3 includes the mean and standard deviation of the results after applying the hypervolume
indicator, and the median and interquartile range (IQR) values are in Table 4.4.

Table 4.3: HV. Mean and standard deviation


        NSGAIIa              NSGAIIb              NSGAIIc              NSGAIId
ZDT1    6.60e-01 (3.1e-04)   6.59e-01 (3.2e-04)   6.59e-01 (3.6e-04)   6.59e-01 (3.2e-04)
ZDT2    3.26e-01 (3.1e-04)   3.26e-01 (3.4e-04)   3.26e-01 (4.1e-04)   3.25e-01 (3.9e-04)
ZDT3    5.15e-01 (1.4e-04)   5.15e-01 (1.6e-04)   5.15e-01 (2.2e-04)   5.14e-01 (2.0e-04)
ZDT4    6.56e-01 (2.8e-03)   6.55e-01 (3.2e-03)   6.52e-01 (5.9e-03)   6.52e-01 (3.2e-03)
DTLZ1   4.87e-01 (4.5e-03)   4.71e-01 (8.8e-02)   4.84e-01 (6.8e-03)   4.80e-01 (1.6e-02)
WFG2    5.62e-01 (1.4e-03)   5.62e-01 (1.4e-03)   5.62e-01 (1.5e-03)   5.62e-01 (1.4e-03)

Table 4.4: HV. Median and IQR


        NSGAIIa              NSGAIIb              NSGAIIc              NSGAIId
ZDT1    6.60e-01 (3.9e-04)   6.59e-01 (2.7e-04)   6.59e-01 (6.3e-04)   6.59e-01 (4.8e-04)
ZDT2    3.26e-01 (3.4e-04)   3.26e-01 (4.1e-04)   3.26e-01 (4.2e-04)   3.25e-01 (5.6e-04)
ZDT3    5.15e-01 (1.7e-04)   5.15e-01 (2.2e-04)   5.15e-01 (2.8e-04)   5.14e-01 (1.9e-04)
ZDT4    6.57e-01 (4.4e-03)   6.56e-01 (4.9e-03)   6.55e-01 (6.0e-03)   6.52e-01 (5.0e-03)
DTLZ1   4.88e-01 (5.3e-03)   4.89e-01 (5.8e-03)   4.85e-01 (7.4e-03)   4.83e-01 (1.2e-02)
WFG2    5.61e-01 (2.6e-03)   5.61e-01 (2.6e-03)   5.61e-01 (2.8e-03)   5.61e-01 (2.7e-03)

An interesting issue is to obtain a ranking of the performance of the compared algorithms. The
Friedman test can be used to get this ranking. After executing the following sentence:

pdflatex FriedmanTestEPSILON.tex

a FriedmanTestEPSILON.pdf file containing Table 4.5 is obtained. Similar tables are produced for the
rest of the quality indicators1

Table 4.5: Average Rankings of the algorithms according to the Epsilon indicator

Algorithm Ranking
NSGAIIa 1.1666666666666665
NSGAIIb 2.6666666666666665
NSGAIIc 2.8333333333333335
NSGAIId 3.3333333333333335

The R directory stores the R scripts. As commented before, the script names are composed of
the indicated prefix ("Problems" in the example) and the name of the quality indicator, and they have
the ".R" extension. Those ending in "Boxplot.R" yield as a result .eps files containing boxplots of the values of
the indicators, while those ending in "Wilcox.R" contain the scripts to produce latex tables including
the application of the Wilcoxon test.
To run the scripts, if you have R properly installed on your computer, you can type the following
commands:
1 The Friedman test assumes that the lower the values of the indicators, the better. This is true for all the indicators but

the hypervolume; in this case, the table should be interpreted accordingly.



[Figure content omitted: a 2 × 3 grid of boxplots, one panel per problem (HV:ZDT1, HV:ZDT2, HV:ZDT3, HV:ZDT4, HV:DTLZ1, and HV:WFG2); each panel compares NSGAIIa, NSGAIIb, NSGAIIc, and NSGAIId.]

Figure 4.2: Boxplots of the values obtained after applying the hypervolume quality indicator (notch =
true).

Table 4.6: ZDT1 .HV.


NSGAIIb NSGAIIc NSGAIId
NSGAIIa ▲ ▲ ▲
NSGAIIb ▲ ▲
NSGAIIc ▲

Rscript ZDT.HV.Boxplot.R
Rscript ZDT.HV.Wilcox.R
Rscript ZDT.EPSILON.Boxplot.R
Rscript ZDT.EPSILON.Wilcox.R
...
Alternatively, if you are working with a UNIX machine, you can type:
for i in *.R ; do Rscript $i 2>/dev/null ; done
As a result, you will get the same number of files, but with the .eps extension. Figure 4.2 shows the
Problems.HV.Boxplot.eps file. Without entering into details about the results, in the notched boxplot,
if two boxes' notches do not overlap, there is strong evidence that their medians differ,
so we could conclude with confidence that NSGAIIa provides the best overall quality indicator values in the
experiment. We can invoke the generateRBoxplotScripts() method with the notch parameter
equal to false if we are not interested in including this feature in the box plots.
Alternatively to using boxplots, the Wilcoxon rank-sum test can be used to determine the significance
of the obtained results. To apply the Wilcoxon test to two distributions a and b, we use the R formula:
wilcox.test(a,b). The latex files produced when the "Wilcox.R" scripts are executed contain two
types of tables: one per problem, and a global table summarizing the results. We include the tables of
the first type corresponding to the hypervolume indicator in Tables 4.6 to 4.11; Table 4.12 groups the
other tables into one. In each table, a ▲ or a ▽ symbol implies a p-value < 0.05, indicating that the
null hypothesis (the two distributions have the same median) is rejected; otherwise, a – is used. The ▲
is used when the algorithm in the row obtained a better value than the algorithm in the column; the ▽
indicates the opposite.
To illustrate the use of the Wilcoxon rank-sum test in other studies, we include in Table 4.13 a table
that was used in [28].

Table 4.7: ZDT2 .HV.


NSGAIIb NSGAIIc NSGAIId
NSGAIIa ▲ ▲ ▲
NSGAIIb ▲ ▲
NSGAIIc ▲

Table 4.8: ZDT3.HV.


NSGAIIb NSGAIIc NSGAIId
NSGAIIa – ▲ ▲
NSGAIIb ▲ ▲
NSGAIIc ▲

Table 4.9: ZDT4 .HV.


NSGAIIb NSGAIIc NSGAIId
NSGAIIa – ▲ ▲
NSGAIIb – ▲
NSGAIIc –

Table 4.10: DTLZ1 .HV.


NSGAIIb NSGAIIc NSGAIId
NSGAIIa – – ▲
NSGAIIb – ▲
NSGAIIc –

Table 4.11: WFG2 .HV.


NSGAIIb NSGAIIc NSGAIId
NSGAIIa – – –
NSGAIIb – –
NSGAIIc –

Table 4.12: ZDT1 ZDT2 ZDT3 ZDT4 DTLZ1 WFG2 .HV.


NSGAIIb NSGAIIc NSGAIId
NSGAIIa ▲ ▲ – – – – ▲ ▲ ▲ ▲ – – ▲ ▲ ▲ ▲ ▲ –
NSGAIIb ▲ ▲ ▲ – – – ▲ ▲ ▲ ▲ ▲ –
NSGAIIc ▲ ▲ ▲ – – –

Table 4.13: LZ09 benchmark. Results of the Wilcoxon rank-sum test applied to the IHV values [28].
NSGA-IIr NSGA-IIa NSGA-IIde
NSGA-II ▽ ▽ ▽ ▽ ▽ ▽ ▽ ▽ ▽ ▽ ▽ ▽ ▽ ▽ ▽ ▽ ▽ ▽ ▽ ▽ ▲ ▲ ▲ ▲ ▲ ▲ ▽
NSGA-IIr − ▲ ▲ ▽ − ▽ ▲ ▲ ▲ ▽ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲
NSGA-IIa ▽ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲

4.5 Experimentation example: StandardStudy


In this section we describe another example of experiment. We have called it StandardStudy because it
represents a kind of study we carry out frequently: comparing a number of different metaheuristics over
the ZDT, DTLZ, and WFG benchmarks, making 100 independent runs.
The algorithmSettings() method (file: jmetal.experiments.studies.StandardStudy) is in-
cluded next:
1 /**
2  * Configures the algorithms in each independent run
3  * @param problemName The problem to solve
4  * @param problemIndex
5  * @throws ClassNotFoundException
6  */
7 public void algorithmSettings(String problemName,
8                               int problemIndex,
9                               Algorithm[] algorithm) throws ClassNotFoundException {
10   try {
11     int numberOfAlgorithms = algorithmNameList_.length;
12
13     HashMap[] parameters = new HashMap[numberOfAlgorithms];
14
15     for (int i = 0; i < numberOfAlgorithms; i++) {
16       parameters[i] = new HashMap();
17     } // for
18
19     if (!paretoFrontFile_[problemIndex].equals("")) {
20       for (int i = 0; i < numberOfAlgorithms; i++)
21         parameters[i].put("paretoFrontFile_", paretoFrontFile_[problemIndex]);
22     } // if
23
24     algorithm[0] = new NSGAII_Settings(problemName).configure(parameters[0]);
25     algorithm[1] = new SPEA2_Settings(problemName).configure(parameters[1]);
26     algorithm[2] = new MOCell_Settings(problemName).configure(parameters[2]);
27     algorithm[3] = new SMPSO_Settings(problemName).configure(parameters[3]);
28     algorithm[4] = new GDE3_Settings(problemName).configure(parameters[4]);
29   } catch (IllegalArgumentException ex) {
30     Logger.getLogger(StandardStudy.class.getName()).log(Level.SEVERE, null, ex);
31   } catch (IllegalAccessException ex) {
32     Logger.getLogger(StandardStudy.class.getName()).log(Level.SEVERE, null, ex);
33   } catch (JMException ex) {
34     Logger.getLogger(StandardStudy.class.getName()).log(Level.SEVERE, null, ex);
35   }
36 } // algorithmSettings

We can observe that this method is simpler than in the case of NSGAIIStudy, because we assume that
each algorithm is configured in its corresponding Settings class. We test five metaheuristics: NSGAII,
SPEA2, MOCell, SMPSO, and GDE3 (lines 24-28).
The main method is included below, where we can observe the algorithm name list (lines 42-43), the
problem list (lines 44-48), and the list of the names of the files containing the Pareto fronts (lines 49-56):
37  ...
38  public static void main(String[] args) throws JMException, IOException {
39    StandardStudy exp = new StandardStudy();
40
41    exp.experimentName_ = "StandardStudy";
42    exp.algorithmNameList_ = new String[]{
43        "NSGAII", "SPEA2", "MOCell", "SMPSO", "GDE3"};
44    exp.problemList_ = new String[]{"ZDT1", "ZDT2", "ZDT3", "ZDT4", "ZDT6",
45        "WFG1", "WFG2", "WFG3", "WFG4", "WFG5", "WFG6",
46        "WFG7", "WFG8", "WFG9",
47        "DTLZ1", "DTLZ2", "DTLZ3", "DTLZ4", "DTLZ5",
48        "DTLZ6", "DTLZ7"};
49    exp.paretoFrontFile_ = new String[]{"ZDT1.pf", "ZDT2.pf", "ZDT3.pf",
50        "ZDT4.pf", "ZDT6.pf",
51        "WFG1.2D.pf", "WFG2.2D.pf", "WFG3.2D.pf",
52        "WFG4.2D.pf", "WFG5.2D.pf", "WFG6.2D.pf",
53        "WFG7.2D.pf", "WFG8.2D.pf", "WFG9.2D.pf",
54        "DTLZ1.3D.pf", "DTLZ2.3D.pf", "DTLZ3.3D.pf",
55        "DTLZ4.3D.pf", "DTLZ5.3D.pf", "DTLZ6.3D.pf",
56        "DTLZ7.3D.pf"};
57  ...

The rest of the code is similar to NSGAIIStudy: the list of indicators is included in line 59, the
directory where the results are written and the one containing the Pareto fronts are specified next (lines 63-65),
the number of independent runs is indicated in line 69, the experiment is initialized in line 71, and the
method that runs the algorithms is invoked (lines 74-75):
58  ...
59    exp.indicatorList_ = new String[]{"HV", "SPREAD", "EPSILON"};
60
61    int numberOfAlgorithms = exp.algorithmNameList_.length;
62
63    exp.experimentBaseDirectory_ = "/Users/antonio/Softw/pruebas/jmetal/" +
64                                   exp.experimentName_;
65    exp.paretoFrontDirectory_ = "/Users/antonio/Softw/pruebas/data/paretoFronts";
66
67    exp.algorithmSettings_ = new Settings[numberOfAlgorithms];
68
69    exp.independentRuns_ = 100;
70
71    exp.initExperiment();
72
73    // Run the experiments
74    int numberOfThreads;
75    exp.runExperiment(numberOfThreads = 4);
76  ...

Finally, we generate the LaTeX tables and the R scripts. Note that we invoke the
generateRBoxplotScripts() and generateRWilcoxonScripts() methods three times, once per problem family;
otherwise, the resulting graphs and tables would not fit into an A4 page:
 77  ...
 78    // Generate latex tables
 79    exp.generateLatexTables();
 80
 81    // Configure the R scripts to be generated
 82    int rows;
 83    int columns;
 84    String prefix;
 85    String[] problems;
 86    boolean notch;
 87
 88    // Configuring scripts for ZDT
 89    rows = 3;
 90    columns = 2;
 91    prefix = new String("ZDT");
 92    problems = new String[]{"ZDT1", "ZDT2", "ZDT3", "ZDT4", "ZDT6"};
 93
 94    exp.generateRBoxplotScripts(rows, columns, problems, prefix, notch = false, exp);
 95    exp.generateRWilcoxonScripts(problems, prefix, exp);
 96
 97    // Configure scripts for DTLZ
 98    rows = 3;
 99    columns = 3;
100    prefix = new String("DTLZ");
101    problems = new String[]{"DTLZ1", "DTLZ2", "DTLZ3", "DTLZ4", "DTLZ5",
102        "DTLZ6", "DTLZ7"};
103
104    exp.generateRBoxplotScripts(rows, columns, problems, prefix, notch = false, exp);
105    exp.generateRWilcoxonScripts(problems, prefix, exp);
106
107    // Configure scripts for WFG
108    rows = 3;
109    columns = 3;
110    prefix = new String("WFG");
111    problems = new String[]{"WFG1", "WFG2", "WFG3", "WFG4", "WFG5", "WFG6",
112        "WFG7", "WFG8", "WFG9"};
113
114    exp.generateRBoxplotScripts(rows, columns, problems, prefix, notch = false, exp);
115    exp.generateRWilcoxonScripts(problems, prefix, exp);
116
117    // Applying the Friedman test
118    Friedman test = new Friedman(exp);
119    test.executeTest("EPSILON");
120    test.executeTest("HV");
121    test.executeTest("SPREAD");

4.6 Experiments when the Pareto fronts of the problems are unknown

When solving real-world problems, their Pareto fronts are usually unknown, so the quality
indicators available in jMetal cannot be applied directly. A usual approach to cope with this issue is to construct
a reference front by collecting the results of all the runs of all the algorithms (see Section 6.5); this
reference front can then be used to compare the relative performance of the algorithms.
We provide an automatic way to obtain the reference fronts within an experiment. From the user's point
of view, the approach is simple: take an experiment class and leave out the information about the Pareto
front files. In the case of the StandardStudy class, described in the previous section, the required changes
are indicated in the following piece of code:

37  ...
38  public static void main(String[] args) throws JMException, IOException {
39    StandardStudy exp = new StandardStudy();
40
41    exp.experimentName_ = "StandardStudy";
42    exp.algorithmNameList_ = new String[]{
43        "NSGAII", "SPEA2", "MOCell", "SMPSO", "GDE3"};
44    exp.problemList_ = new String[]{"ZDT1", "ZDT2", "ZDT3", "ZDT4", "ZDT6",
45        "WFG1", "WFG2", "WFG3", "WFG4", "WFG5", "WFG6",
46        "WFG7", "WFG8", "WFG9",
47        "DTLZ1", "DTLZ2", "DTLZ3", "DTLZ4", "DTLZ5",
48        "DTLZ6", "DTLZ7"};
49    exp.paretoFrontFile_ = new String[18];  // Space allocation for 18 fronts
50  ...
51    exp.paretoFrontDirectory_ = "";         // This directory must be empty
52  ...

We provide a StandardStudy2 class including these sentences.


When the experiment is executed, a new directory called referenceFront appears in the same location
as the R, latex, and data directories depicted in Figure 4.1. That directory contains a file with a reference
front per problem. These fronts are used in the experiment to apply the quality indicators2 .

2 This is a contribution of Jorge Rodrı́guez



4.7 Using quality indicators


When any of the algorithms provided by jMetal is executed to solve a problem, two files are produced,
containing the approximations to the optimal solutions and the Pareto front (by default, these files
are called VAR and FUN, respectively). Typically, the file FUN is used to apply some quality indicators
(hypervolume, spread, etc.) offline.
In jMetal 1.8, a new class, jmetal.qualityIndicator.QualityIndicator, was introduced. This class
is intended to facilitate the use of quality indicators inside the programs. We use this class mainly
in the main and Settings classes of the algorithms. For example, let us take a look at the
jmetal/metaheuristics/moead/MOEAD_main.java file, which executes the algorithm MOEA/D-DE.
Here we can find the following lines of code:
• Importing the required class:
32 import jmetal.qualityIndicator.QualityIndicator;
• Declaring an object of the class:
57 QualityIndicator indicators ; // Object to get quality indicators
• The object is created using the third argument from the command line, which should contain the
file storing the Pareto front of the problem to solve:
72 indicators = new QualityIndicator(problem, args[1]) ;

• Using the indicators object:


117 if (indicators != null) {
118 logger_.info("Quality indicators") ;
119 logger_.info("Hypervolume: " + indicators.getHypervolume(population)) ;
120 logger_.info("GD : " + indicators.getGD(population)) ;
121 logger_.info("IGD : " + indicators.getIGD(population)) ;
122 logger_.info("Spread : " + indicators.getSpread(population)) ;
123 logger_.info("Epsilon : " + indicators.getEpsilon(population)) ;
124 } // if

As can be seen, the QualityIndicator object is applied to the population returned by the algorithm.
This way, the program reports the values of the desired quality indicators for the obtained solution
set.
Another example of using QualityIndicator objects was introduced in Section 3.3.1, where the use
of the hypervolume inside NSGA-II to measure the convergence speed of the algorithm was detailed.

4.8 Running experiments in parallel


In jMetal 2.2 we introduced a first approximation to making use of current multi-core processors to speed
up the execution of experiments, using Java threads for that purpose. In jMetal 4.2 a new parallel
scheme was adopted to enhance the performance when running experiments.
We have tested this new feature by running the NSGAIIStudy experiment (see Section 4.4). The
computer is a MacBook with a Core i7 2.2GHz processor and 8 GB of RAM, running Mac OS X
10.7.5 (11G63); the Java version is "Java(TM) SE Runtime Environment (build 1.6.0_37-b06-434)". The
computing time using one thread is roughly 3.5 minutes, while using eight threads it is 2.4 minutes. The
speed-up is far from linear, but we have to take into account that each run of the algorithms takes around 0.5
seconds; when dealing with problems having more time-consuming objective functions, or with algorithms
executing more than 25,000 function evaluations, the speed-up is expected to be more noticeable.
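If the number of cores of the target machine is not known in advance, the thread count passed to runExperiment() can be taken from the JVM itself; the following is just a small sketch combining two calls already shown in this manual:

  int numberOfThreads = Runtime.getRuntime().availableProcessors(); // as many threads as reported cores
  exp.runExperiment(numberOfThreads);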
Chapter 5

Parallel Algorithms

Since version 4.3, jMetal provides basic support for developing parallel metaheuristics. This chapter
describes our first approximation to this issue, which is currently focused on allowing the parallel
evaluation of solutions, taking advantage of the multiple cores of modern processors.

5.1 The IParallelEvaluator Interface


The approach taken to evaluate solutions in parallel is simple: the solutions to be evaluated are inserted
into a list, which is submitted to a parallel evaluator object that carries out the parallel computation.
These objects are represented by the IParallelEvaluator interface, located in the
jmetal.util.parallel package.
 1  // IParallelEvaluator.java
 2  ...
 3  package jmetal.util.parallel;
 4
 5  import java.util.List;
 6
 7  import jmetal.core.Problem;
 8  import jmetal.core.Solution;
 9
10  /**
11   * @author Antonio J. Nebro
12   * Interface representing classes for evaluating solutions in parallel
13   * The procedure is:
14   * 1- create the parallel evaluator with startEvaluator()
15   * 2- add solutions for being evaluated with addSolutionForEvaluation()
16   * 3- evaluate the solutions with parallelEvaluation()
17   * 4- shutdown the parallel evaluator with stopEvaluator()
18   */
19
20  public interface IParallelEvaluator {
21    public void startEvaluator(Problem problem);
22    public void addSolutionForEvaluation(Solution solution);
23    public List<Solution> parallelEvaluation();
24    public void stopEvaluator();
25  }

Listing 5.1: IParallelEvaluator interface

The code of IParallelEvaluator is included in Listing 5.1, and contains four methods (a short usage sketch of the complete protocol is given after the list):

• startEvaluator(): initializes and starts the parallel evaluator object.


• addSolutionForEvaluation(): sends a solution to the evaluator object. This solution will be
queued in an internal list.

• parallelEvaluation(): all the solutions in the internal list are evaluated in parallel, and a list
containing them is returned.

• stopEvaluator(): the parallel evaluator is stopped.
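As a quick illustration, the four-step protocol looks as follows when put together. This is only a sketch: problem and populationSize are assumed to be defined by the caller, and the MultithreadedEvaluator class described below is used as the concrete evaluator:

  IParallelEvaluator evaluator = new MultithreadedEvaluator(0); // 0 = use all available cores
  evaluator.startEvaluator(problem);                            // 1. start the evaluator
  for (int i = 0; i < populationSize; i++)
    evaluator.addSolutionForEvaluation(new Solution(problem));  // 2. queue the solutions to evaluate
  List<Solution> evaluated = evaluator.parallelEvaluation();    // 3. evaluate them in parallel
  evaluator.stopEvaluator();                                    // 4. shut the evaluator down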

The IParallelEvaluator interface allows many possible implementations. In jMetal 4.3 we provide
the MultithreadedEvaluator class, which is designed to make use of the processors/cores which are
available on most computers nowadays. The constructor of this class is detailed in Listing 5.2. It takes
as argument an integer value indicating the desired number of threads to be used. If this argument takes
the value 0 then the number of processors of the system is used, according to the value returned by the
Java Runtime.getRuntime().availableProcessors() method.
 1  // MultithreadedEvaluator.java
 2  ...
 3  package jmetal.util.parallel;
 4  ...
 5  public class MultithreadedEvaluator implements IParallelEvaluator {
 6    ...
 7    /**
 8     * Constructor
 9     * @param threads
10     */
11    public MultithreadedEvaluator(int threads) {
12      numberOfThreads_ = threads;
13      if (threads == 0)
14        numberOfThreads_ = Runtime.getRuntime().availableProcessors();
15      else if (threads < 0) {
16        Configuration.logger_.severe("MultithreadedEvaluator: the number of threads" +
17            " cannot be a negative number " + threads);
18      }
19      else {
20        numberOfThreads_ = threads;
21      }
22    }
23
24    public void startEvaluator(Problem problem) { ...
25    public void addSolutionForEvaluation(Solution solution) { ...
26    public List<Solution> parallelEvaluation() { ...
27    public void stopEvaluator() { ...
28  }

Listing 5.2: MultithreadedEvaluator class
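To give an idea of how such an evaluator can be organized internally, the following is a minimal, illustrative sketch based on a standard Java ExecutorService. It is not the actual MultithreadedEvaluator code: the class name is hypothetical, the pool size is simply taken from the number of available cores, and error handling is reduced to a minimum.

  // Illustrative sketch only: not the actual MultithreadedEvaluator implementation
  package jmetal.util.parallel;

  import java.util.ArrayList;
  import java.util.List;
  import java.util.concurrent.Callable;
  import java.util.concurrent.ExecutorService;
  import java.util.concurrent.Executors;
  import java.util.concurrent.Future;

  import jmetal.core.Problem;
  import jmetal.core.Solution;

  public class SimpleThreadedEvaluator implements IParallelEvaluator {
    private Problem problem_;
    private ExecutorService executor_;
    private List<Future<Solution>> tasks_;

    public void startEvaluator(Problem problem) {
      problem_ = problem;
      // One worker thread per available core
      executor_ = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
      tasks_ = new ArrayList<Future<Solution>>();
    }

    public void addSolutionForEvaluation(final Solution solution) {
      // Each queued solution becomes a task that evaluates it and returns it
      tasks_.add(executor_.submit(new Callable<Solution>() {
        public Solution call() throws Exception {
          problem_.evaluate(solution);
          return solution;
        }
      }));
    }

    public List<Solution> parallelEvaluation() {
      List<Solution> evaluated = new ArrayList<Solution>();
      try {
        for (Future<Solution> task : tasks_)
          evaluated.add(task.get()); // wait for every pending evaluation
      } catch (Exception e) {
        throw new RuntimeException(e);
      }
      tasks_.clear();
      return evaluated;
    }

    public void stopEvaluator() {
      executor_.shutdown();
    }
  }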

5.2 Evaluating Solutions In Parallel in NSGA-II: pNSGAII


In this section, we illustrate the use of the IParallelEvaluator in pNSGAII, a version of NSGA-II
using this interface.
We start by showing how to instantiate the parallel evaluator, as is done in the pNSGAII_main
class (see Listing 5.3, line 7). Then, the evaluator is passed as the second argument to the pNSGAII class
constructor.
 1  // pNSGAII_main.java
 2  ...
 3  public class pNSGAII_main {
 4    ...
 5    public static void main(String[] args) { ...
 6      ...
 7      int threads = 4; // 0 - use all the available cores
 8      IParallelEvaluator parallelEvaluator = new MultithreadedEvaluator(threads);
 9
10      algorithm = new pNSGAII(problem, parallelEvaluator);
11      ...
12    }
13  }

Listing 5.3: pNSGAII main class

The pNSGAII class contains a state variable to reference the parallel evaluator, as shown in line 7 in
Listing 5.4. It is initialized in the class constructor (line 17).
 1  // pNSGAII.java
 2  ...
 3  package jmetal.metaheuristics.nsgaII;
 4  ...
 5  public class pNSGAII extends Algorithm {
 6
 7    IParallelEvaluator parallelEvaluator_;
 8
 9    /**
10     * Constructor
11     * @param problem Problem to solve
12     * @param evaluator Parallel evaluator
13     */
14    public pNSGAII(Problem problem, IParallelEvaluator evaluator) {
15      super(problem);
16
17      parallelEvaluator_ = evaluator;
18    } // pNSGAII
19    ...
20    public SolutionSet execute() throws JMException, ClassNotFoundException {
21      ...

Listing 5.4: pNSGAII class. Constructor

The parallel evaluator is started in line 22 of the code included in Listing 5.5. The startEvaluator()
method takes as parameter the problem being solved, which is needed for the later evaluation of the
solutions. The initial population is initialized in three steps. First, in the loop starting in line 26, every
newly instantiated solution (line 27) is sent to the evaluator (line 28); second, the parallelEvaluation()
method of the parallel evaluator is invoked (line 31); finally, the evaluated solutions are inserted into the
population (lines 32-35).
19  ...
20  public SolutionSet execute() throws JMException, ClassNotFoundException {
21    ...
22    parallelEvaluator_.startEvaluator(problem_);
23    ...
24    // Create the initial solutionSet
25    Solution newSolution;
26    for (int i = 0; i < populationSize; i++) {
27      newSolution = new Solution(problem_);
28      parallelEvaluator_.addSolutionForEvaluation(newSolution);
29    }
30
31    List<Solution> solutionList = parallelEvaluator_.parallelEvaluation();
32    for (Solution solution : solutionList) {
33      population.add(solution);
34      evaluations++;
35    }
36    ...

Listing 5.5: pNSGAII class. Initializing initial population



Table 5.1: Solving ZDT1 with NSGA-II and pNSGAII with 1, 8, 32, 128, and 512 threads (times in
milliseconds).

          NSGAII    1T     8T     32T    128T   512T
  ZDT1      670     730    750    770     900    950

The same scheme is applied to evaluate in parallel the solutions created after applying the crossover
and mutation operators, as can be observed in the piece of code included in Listing 5.6.
36  ...
37  // Generations
38  while (evaluations < maxEvaluations) {
39    // Create the offSpring solutionSet
40    offspringPopulation = new SolutionSet(populationSize);
41    Solution[] parents = new Solution[2];
42    for (int i = 0; i < (populationSize / 2); i++) {
43      if (evaluations < maxEvaluations) {
44        // obtain parents
45        parents[0] = (Solution) selectionOperator.execute(population);
46        parents[1] = (Solution) selectionOperator.execute(population);
47        Solution[] offSpring = (Solution[]) crossoverOperator.execute(parents);
48        mutationOperator.execute(offSpring[0]);
49        mutationOperator.execute(offSpring[1]);
50        parallelEvaluator_.addSolutionForEvaluation(offSpring[0]);
51        parallelEvaluator_.addSolutionForEvaluation(offSpring[1]);
52      } // if
53    } // for
54
55    List<Solution> solutions = parallelEvaluator_.parallelEvaluation();
56
57    for (Solution solution : solutions) {
58      offspringPopulation.add(solution);
59      evaluations++;
60    }
61  ...

Listing 5.6: pNSGAII class. Evaluating solutions in parallel in the main loop of NSGA-II

Besides pNSGAII, a parallel version of the SMPSO algorithm, named pSMPSO (included in the
jmetal.metaheuristics.smpso package), is provided in jMetal 4.3.

5.3 About Parallel Performance


In this section, we include some performance figures to give an idea of the benefits of using the multi-
threaded parallel evaluator. The tests are executed on a MacBook Pro with a 2.2 GHz Intel Core i7 and
8 GB of 1333 MHz DDR3 RAM; the operating system is Mac OS X Lion 10.7.5 (11G63) and the Java version
is 1.6.0_37.
To determine the overhead of using the parallel evaluator, we execute NSGA-II to solve the ZDT1
problem using standard settings (25,000 function evaluations) and pNSGAII with 1, 8, 32, 128, and
512 threads. The reported times in Table 5.1 are the rough mean of a few independent runs. We
can observe that the overhead of using 1 thread in pNSGAII versus NSGA-II is about 60 ms, which
can be considered very low taking into account that we are solving a benchmark problem. As the
i7 processor has four cores and each one incorporates two hyperthreads, the total number of cores
reported by the Java Runtime.getRuntime().availableProcessors() method is 8. As a consequence,
the performance should degrade when using more than 8 threads, which is corroborated by the times
reported in Table 5.1, although the penalty when using up to 512 threads is about 300 ms and 200 ms
compared to the sequential NSGA-II and pNSGAII with 8 threads, respectively.

Table 5.2: Solving ZDT1b with NSGA-II and pNSGAII with 1, 8, 32, 128, and 512 threads (times in
milliseconds). ZDT1b is the same problem as ZDT1 but includes an idle loop in the evaluation function to
increase its computing time.

           NSGAII     1T      8T      32T     128T    512T
  ZDT1b    86,500   87,000   24,000  24,000  24,000  24,000

Next, we increase the computing time needed to evaluate solutions of ZDT1 by adding the following loop
to the evaluation function:
1  for (long i = 0; i < 10000000; i++) ;

We call this problem ZDT1b, and the computing times of the NSGA-II variants are included in
Table 5.2. Now, NSGA-II requires 86.5 seconds to perform 25,000 evaluations, while the multithreaded
versions with 8 or more threads take roughly 24 seconds, which means a speed-up of 3.6 (i.e., an efficiency of
0.45). The reason for not achieving a higher speed-up is that the individuals are evaluated in parallel,
but the ranking and crowding procedures are carried out sequentially.
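For reference, a problem like ZDT1b can be obtained with a small subclass such as the following sketch; the package location and constructor signature of ZDT1 are assumed to be those of the jMetal distribution:

  // Sketch: ZDT1 with an artificial delay added to the evaluation function
  package jmetal.problems.ZDT;

  import jmetal.core.Solution;
  import jmetal.util.JMException;

  public class ZDT1b extends ZDT1 {

    public ZDT1b(String solutionType) throws ClassNotFoundException {
      super(solutionType);
    }

    public void evaluate(Solution solution) throws JMException {
      super.evaluate(solution);               // the original ZDT1 evaluation
      for (long i = 0; i < 10000000; i++) ;   // idle loop to increase the computing time
    }
  }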
Chapter 6

How-to’s

This chapter contains answers to some questions that may arise when working with jMetal.

6.1 How to use binary representations in jMetal


All the examples we have presented in the manual are related to optimizing continuous problems using
a real representation of the solutions. In this section we include an example of using a binary coded
representation. To illustrate this, we use the jmetal.experiments.settings.NSGAIIBinary_Settings
class.
Let us start by commenting on the piece of code below. The code is very simple, and we can observe
how the BinaryReal encoding is selected (line 22), so the problem is configured to use binary coded real
solutions (line 24). The mutation probability is set to 1/L (line 34), where L is the number of bits of
the solution.
 1  // NSGAIIBinary_Settings.java
 2  ...
 3  package jmetal.experiments.settings;
 4  ...
 5  /**
 6   * Settings class of algorithm NSGA-II (binary encoding)
 7   */
 8  public class NSGAIIBinary_Settings extends Settings {
 9
10    int populationSize_;
11    int maxEvaluations_;
12
13    double mutationProbability_;
14    double crossoverProbability_;
15
16    /**
17     * Constructor
18     */
19    public NSGAIIBinary_Settings(String problem) {
20      super(problem);
21
22      Object[] problemParams = {"BinaryReal"};
23      try {
24        problem_ = (new ProblemFactory()).getProblem(problemName_, problemParams);
25      } catch (JMException e) {
26        // TODO Auto-generated catch block
27        e.printStackTrace();
28      }
29
30      // Default settings
31      populationSize_ = 100;
32      maxEvaluations_ = 25000;
33
34      mutationProbability_ = 1.0 / problem_.getNumberOfBits();
35      crossoverProbability_ = 0.9;
36    } // NSGAIIBinary_Settings
37  ...

In the configure() method, we choose the single point crossover and the bit-flip mutation operators
(lines 62-68):

38  /**
39   * Configure NSGAII with user-defined parameter settings
40   * @return A NSGAII algorithm object
41   * @throws jmetal.util.JMException
42   */
43  public Algorithm configure() throws JMException {
44    Algorithm algorithm;
45    Operator selection;
46    Operator crossover;
47    Operator mutation;
48
49    QualityIndicator indicators;
50
51    HashMap parameters; // Operator parameters
52
53    // Creating the problem
54    algorithm = new NSGAII(problem_);
55
56    // Algorithm parameters
57    algorithm.setInputParameter("populationSize", populationSize_);
58    algorithm.setInputParameter("maxEvaluations", maxEvaluations_);
59
60
61    // Mutation and Crossover Binary codification
62    parameters = new HashMap();
63    parameters.put("probability", 0.9);
64    crossover = CrossoverFactory.getCrossoverOperator("SinglePointCrossover", parameters);
65
66    parameters = new HashMap();
67    parameters.put("probability", 1.0 / problem_.getNumberOfBits());
68    mutation = MutationFactory.getMutationOperator("BitFlipMutation", parameters);
69
70    // Selection Operator
71    parameters = null;
72    selection = SelectionFactory.getSelectionOperator("BinaryTournament2", parameters);
73
74    // Add the operators to the algorithm
75    algorithm.addOperator("crossover", crossover);
76    algorithm.addOperator("mutation", mutation);
77    algorithm.addOperator("selection", selection);
78
79    return algorithm;
80  } // configure

If we want to use NSGA-II to solve, for example, the ZDT4 problem by using a binary coded real representation,
we simply need to execute this command: java jmetal.experiments.Main NSGAIIBinary ZDT4. If
the problem only allows a binary encoding (e.g., the ZDT5 problem), then line 22 must be modified
as follows:
Object [] problemParams = {"Binary"};

6.2 How to use permutation representations in jMetal


Using a permutation encoding is very similar to using a binary representation. We provide an example
in the jmetal.experiments.settings.NSGAIIPermutation_Settings class:
 1  // NSGAIIPermutation_Settings.java
 2  ...
 3  package jmetal.experiments.settings;
 4  ...
 5  /**
 6   * Settings class of algorithm NSGA-II (permutation encoding)
 7   */
 8  public class NSGAIIPermutation_Settings extends Settings {
 9
10    int populationSize_;
11    int maxEvaluations_;
12
13    double mutationProbability_;
14    double crossoverProbability_;
15
16    /**
17     * Constructor
18     */
19    public NSGAIIPermutation_Settings(String problem) {
20      super(problem);
21
22      Object[] problemParams = {"Permutation"};
23      try {
24        problem_ = (new ProblemFactory()).getProblem(problemName_, problemParams);
25      } catch (JMException e) {
26        // TODO Auto-generated catch block
27        e.printStackTrace();
28      }
29
30      // Default settings
31      populationSize_ = 100;
32      maxEvaluations_ = 25000;
33
34      mutationProbability_ = 1.0 / problem_.getNumberOfBits();
35      crossoverProbability_ = 0.9;
36    } // NSGAIIPermutation_Settings
37  ...

In the configure() method, we choose the PMX crossover and swap mutation operators
(lines 62-68):
38  /**
39   * Configure NSGAII with user-defined parameter settings
40   * @return A NSGAII algorithm object
41   * @throws jmetal.util.JMException
42   */
43  public Algorithm configure() throws JMException {
44    Algorithm algorithm;
45    Operator selection;
46    Operator crossover;
47    Operator mutation;
48
49    QualityIndicator indicators;
50
51    HashMap parameters; // Operator parameters
52
53    // Creating the problem
54    algorithm = new NSGAII(problem_);
55
56    // Algorithm parameters
57    algorithm.setInputParameter("populationSize", populationSize_);
58    algorithm.setInputParameter("maxEvaluations", maxEvaluations_);
59
60
61    // Mutation and Crossover Permutation codification
62    parameters = new HashMap();
63    parameters.put("probability", crossoverProbability_);
64    crossover = CrossoverFactory.getCrossoverOperator("PMXCrossover", parameters);
65
66    parameters = new HashMap();
67    parameters.put("probability", mutationProbability_);
68    mutation = MutationFactory.getMutationOperator("SwapMutation", parameters);
69    // Selection Operator
70    parameters = null;
71    selection = SelectionFactory.getSelectionOperator("BinaryTournament2", parameters);
72
73    // Add the operators to the algorithm
74    algorithm.addOperator("crossover", crossover);
75    algorithm.addOperator("mutation", mutation);
76    algorithm.addOperator("selection", selection);
77
78    return algorithm;
79  } // configure

6.3 How to use the Mersenne Twister pseudorandom number generator?

The default pseudorandom number generator can be replaced by the Mersenne Twister algorithm1.
The way to do it is quite simple. The jmetal.util.PseudoRandom class has a method called
setRandomGenerator(), so to use the Mersenne Twister just add the following sentence at the beginning
of the execution of an algorithm:
1  jmetal.util.PseudoRandom.setRandomGenerator(new MersenneTwisterFast());

1 This is a contribution of Jean-Laurent Hippolyte

6.4 How to create a new solution type having mixed variables?


jMetal provides many built-in solution types (RealSolutionType, BinarySolutionType, IntSolutionType,
PermutationSolutionType, etc.). A solution type specifies the types of the variables a given solution
can have, thus defining the solution encoding. In general, the variables of most of the provided solution
types are of the same class; thus, the solution type RealSolutionType refers to solutions composed of real
variables, IntSolutionType incorporates integer variables, and so on. However, defining solution
types having different variable types is very simple in jMetal; in fact, it is no different from defining
one containing variables of the same type.
In jMetal 4.0 we provide two solution types having mixed variables: IntRealSolutionType and
ArrayRealAndBinarySolutionType; the first one contains integer and real variables, and the second
one represents solutions having an array of real values plus a binary string. The code of
IntRealSolutionType is included in Listing 6.1.
 1  // IntRealSolutionType.java
 2  ...
 3  package jmetal.encodings.solutionType;
 4  ...
 5  /**
 6   * Class representing a solution type including two variables: an integer
 7   * and a real.
 8   */
 9  public class IntRealSolutionType extends SolutionType {
10    private int intVariables_;
11    private int realVariables_;
12
13    /**
14     * Constructor
15     */
16    public IntRealSolutionType(Problem problem, int intVariables, int realVariables) throws ClassNotFoundException {
17      super(problem);
18      intVariables_ = intVariables;
19      realVariables_ = realVariables;
20    } // Constructor
21
22    /**
23     * Creates the variables of the solution
24     * @param decisionVariables
25     * @throws ClassNotFoundException
26     */
27    public Variable[] createVariables() throws ClassNotFoundException {
28      Variable[] variables = new Variable[problem_.getNumberOfVariables()];
29
30      for (int var = 0; var < intVariables_; var++)
31        variables[var] = new Int((int) problem_.getLowerLimit(var), (int) problem_.getUpperLimit(var));
32
33      for (int var = intVariables_; var < (intVariables_ + realVariables_); var++)
34        variables[var] = new Real(problem_.getLowerLimit(var), problem_.getUpperLimit(var));
35
36      return variables;
37    } // createVariables
38  } // IntRealSolutionType

Listing 6.1: IntRealSolutionType class

We can observe that the number of integer and real variables is indicated in the class constructor
(lines 16-20), and when the createVariables() method is called, the required variables are created.
The ArrayRealAndBinarySolutionType class is even simpler (see Listing 6.2).
 1  // ArrayRealAndBinarySolutionType.java
 2  ...
 3  package jmetal.encodings.solutionType;
 4  ...
 5  /**
 6   * Class representing the solution type of solutions composed of an array of reals
 7   * and a binary string.
 8   * ASSUMPTIONs:
 9   * - The numberOfVariables field in class Problem must contain the number
10   *   of real variables. This field is used to apply real operators (e.g.,
11   *   mutation probability)
12   * - The upperLimit and lowerLimit arrays must have the length indicated
13   *   by numberOfVariables.
14   */
15  public class ArrayRealAndBinarySolutionType extends SolutionType {
16    private int binaryStringLength_;
17    private int numberOfRealVariables_;
18    /**
19     * Constructor
20     * @param problem
21     * @param realVariables Number of real variables
22     * @param binaryStringLength Length of the binary string
23     * @throws ClassNotFoundException
24     */
25    public ArrayRealAndBinarySolutionType(Problem problem,
26                                          int realVariables,
27                                          int binaryStringLength)
28        throws ClassNotFoundException {
29      super(problem);
30      binaryStringLength_ = binaryStringLength;
31      numberOfRealVariables_ = realVariables;
32    } // Constructor
33
34    /**
35     * Creates the variables of the solution
36     * @param decisionVariables
37     * @throws ClassNotFoundException
38     */
39    public Variable[] createVariables() throws ClassNotFoundException {
40      Variable[] variables = new Variable[2];
41
42      variables[0] = new ArrayReal(numberOfRealVariables_, problem_);
43      variables[1] = new Binary(binaryStringLength_);
44      return variables;
45    } // createVariables
46  } // ArrayRealAndBinarySolutionType

Listing 6.2: ArrayRealAndBinarySolutionType class

As with any other solution type, the key point is that we can define operators to be applied to it. As
we observed in the description of the SBX crossover (see Listing 3.5), we can specify in the operator the
list of valid types to which it can be applied. In jMetal 4.0 we supply two operators for this solution type:
• SBXSinglePointCrossover: applies an SBX crossover to the real variables and a single point
crossover to the binary part.
• PolynomialBitFlipMutation: the real part of the solution is mutated with a polynomial mutation
and a bit flip is applied to the binary string.
If we take a look at the implementation of these two operators, we can observe that they do not differ
from any other operator, such as the SBX crossover detailed in Section 3.1.2.
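As an illustration of how a problem can adopt one of these mixed solution types, the following sketch defines a hypothetical problem with two integer and three real variables. The class, its toy objectives, and the "IntReal" type tag are assumptions made for this example; the constructor pattern and the trailing-underscore field names mirror the Schaffer and Kursawe problems shown in Section 7.2.

  // Illustrative sketch only: a hypothetical problem mixing 2 integer and 3 real variables
  package jmetal.problems;

  import jmetal.core.Problem;
  import jmetal.core.Solution;
  import jmetal.core.Variable;
  import jmetal.encodings.solutionType.IntRealSolutionType;
  import jmetal.util.JMException;

  public class MixedExample extends Problem {

    public MixedExample(String solutionType) throws ClassNotFoundException {
      numberOfVariables_   = 5;   // 2 integer variables followed by 3 real ones
      numberOfObjectives_  = 2;
      numberOfConstraints_ = 0;
      problemName_ = "MixedExample";

      lowerLimit_ = new double[numberOfVariables_];
      upperLimit_ = new double[numberOfVariables_];
      for (int i = 0; i < numberOfVariables_; i++) {
        lowerLimit_[i] = 0.0;
        upperLimit_[i] = 10.0;
      }

      if (solutionType.compareTo("IntReal") == 0)
        solutionType_ = new IntRealSolutionType(this, 2, 3); // 2 ints, 3 reals
      else {
        System.out.println("Error: solution type " + solutionType + " invalid");
        System.exit(-1);
      }
    } // MixedExample

    public void evaluate(Solution solution) throws JMException {
      Variable[] v = solution.getDecisionVariables();
      double sum = 0.0;
      double product = 1.0;
      for (int i = 0; i < numberOfVariables_; i++) {
        sum += v[i].getValue();
        product *= 1.0 + Math.abs(v[i].getValue());
      }
      solution.setObjective(0, sum);            // first (toy) objective
      solution.setObjective(1, 1.0 / product);  // second (toy) objective
    } // evaluate
  } // MixedExample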
The solution types and operators cited in this section can be used as templates to develop your own
solution types and associated operators if they are not available in jMetal. If you do so and think that
your new classes can be useful to other researchers, please feel free to contact us to include them in
jMetal.

6.5 How to obtain the non-dominated solutions from a file?


Sometimes we face the problem of having a file containing both dominated and non-dominated solutions
and we need to get only the non-dominated ones. To achieve this, we provide the jmetal.util.ExtractParetoFront
utility.
To illustrate the use of this tool, let us suppose that we made 100 independent runs of an algorithm
to solve a given problem. If the resulting Pareto front approximations are stored in files named FUN.0,
FUN.1, ..., FUN.99, we can obtain a file containing all the found solutions, if we are working on a UNIX
machine, with this command:
1  % for i in FUN.* ; do cat $i >> front ; done
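On other platforms, the same concatenation can be obtained with a few lines of plain Java. The following is just a sketch (the class name is hypothetical) that merges the FUN.0 ... FUN.99 files mentioned above into a single file named front:

  // Sketch: concatenate FUN.0 ... FUN.99 into a file named "front"
  import java.io.BufferedReader;
  import java.io.FileReader;
  import java.io.FileWriter;
  import java.io.IOException;
  import java.io.PrintWriter;

  public class MergeFronts {
    public static void main(String[] args) throws IOException {
      PrintWriter out = new PrintWriter(new FileWriter("front"));
      for (int i = 0; i < 100; i++) {
        BufferedReader in = new BufferedReader(new FileReader("FUN." + i));
        String line;
        while ((line = in.readLine()) != null)
          out.println(line);   // append every solution of this run
        in.close();
      }
      out.close();
    }
  }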

If we run the utility without any parameters we get the following messages:

1  % java jmetal.util.ExtractParetoFront
2  Wrong number of arguments:
3  Sintaxt: java ExtractParetoFront <file> <dimensions>
4    <file> is a file containing points
5    <dimensions> represents the number of dimensions of the problem

Thus, to select the non-dominated solutions from the previous file we have to execute the utility
as follows (we assume that the number of objectives of the problem is 2):
1  % java jmetal.util.ExtractParetoFront front 2

As a result, the program generates a file named "front.pf".


This utility is useful when we run a number of algorithms to solve a problem whose Pareto front is
unknown. In this case, a reference Pareto front can be easily obtained by joining the results of all the executions
and then selecting the non-dominated solutions.

6.6 How to use the WFG Hypervolume algorithm


A current trend in multi-objective optimization is the design of indicator-based algorithms, i.e., techniques
including some kind of quality indicator to guide the search somehow. A typical example is the
SMS-EMOA algorithm [12], which makes use of the Hypervolume in the selection step. The drawback
of this approach is that computing the Hypervolume can be very expensive, as its computation grows
exponentially with the number of objectives of the problem being solved. We proposed a version of the
SMPSO algorithm [25] based on this idea in [24]; in particular, we replaced the crowding distance based
archive by another one using the Hypervolume contribution as density estimator. The resulting
SMPSOhv algorithm outperformed the original SMPSO in a benchmark composed of bi- and three-objective
problems at the cost of a considerably higher computing time: solving the ZDT1 problem with SMPSO
using standard settings takes less than a second, while SMPSOhv requires about one minute. Beyond
three objectives, the computing times of SMPSOhv grow to become unacceptable.
To cope with the high cost of computing the Hypervolume, we have included a new package called
jmetal.qualityIndicator.fastHypervolume, which contains an implementation of the Hypervolume
algorithm introduced by the Walking Fish Group (WFG) in [38], and a FastHypervolume class making
use of the WFG solution. The main features of the FastHypervolume class are:

• No normalization is performed, in order to accelerate the Hypervolume calculation. This must be taken into
account if this implementation is going to be used instead of Zitzler's Hypervolume, contained in
the jmetal.qualityIndicator.Hypervolume class.

• The bi-objective case has been optimized when calculating the Hypervolume contribution of the solutions
in a SolutionSet. To give an idea of the performance improvements, when solving the ZDT1
problem with SMS-EMOA on our development laptop the times are reduced from 47s to 3.6s.

• When the number of objectives is higher than two, the WFG algorithm is used. We have to
note that although the WFG Hypervolume is considered to be very fast beyond 5 objectives, it
is still too slow to be used in algorithms such as SMPSOhv when trying to solve many-objective
optimization problems.

Additionally, a FastHypervolumeArchive class is included, which can be used as an external archive
in many multi-objective metaheuristics, as in SMPSOhv.
The WFG Hypervolume algorithm can be invoked from the command line, as the following examples
show:

$ java jmetal.qualityIndicator.fastHypervolume.wfg.WFGHV front 1.0 1.0


Using reference point: 1.0 1.0
hv = 0.6620693871677154

$ java jmetal.qualityIndicator.fastHypervolume.wfg.WFGHV front 3.0 3.0


Using reference point: 3.0 3.0
hv = 8.662062933772107

6.7 How to configure the algorithms from a configuration file?


The traditional way of working with jMetal algorithms is to set the parameter values in the main or
Settings classes (e.g., NSGAII_main or NSGAII_Settings in the case of NSGA-II) and then recompile
the source code. The reason is that when we work with jMetal, compiling the code requires less than a
second, so we found no problem with this way of working. However, some users have reported to us that they
integrate jMetal into other tools, and the need to recompile whenever a metaheuristic has to be configured
is cumbersome.
Thanks to a contribution of Francisco Luna, since jMetal 4.5 it is possible to configure the algorithm settings
by using Java properties files. A properties file is a text file containing key=value pairs, so the
idea is to provide a file of this kind to indicate the settings of a metaheuristic's parameters. We impose
the requirement that if the algorithm name is Foo then the corresponding parameter settings file must
be named Foo.conf. In the case of NSGA-II, the NSGAII.conf file content with the default values is:

#NSGAII.conf file
populationSize=100
maxEvaluations=25000
#mutationProbability=.6
crossoverProbability=0.9
mutationDistributionIndex=20.0
crossoverDistributionIndex=20.0

The parameters that can be set are those indicated in the corresponding Foo_Settings class (e.g.,
NSGAII_Settings). There is no need to indicate the values of all the parameters; if some of them are
missing in the file, or the line containing them starts with #, the default value in the Settings class will be
used.
To read this file, the way of calling a metaheuristic is through the jmetal.experiments.MainC class.
After putting the configuration file in the working directory, the MainC class can be invoked as follows:

java jmetal.experiments.MainC NSGAII
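The key=value mechanism itself relies on standard Java properties. As a rough illustration of how such a file can be parsed (this is only a sketch, not the actual MainC or Settings implementation, and the class name is hypothetical), the second argument of getProperty() acts as the fall-back value for missing or commented-out keys:

  // Sketch: reading NSGAII.conf with java.util.Properties
  import java.io.FileInputStream;
  import java.util.Properties;

  public class ReadConfExample {
    public static void main(String[] args) throws Exception {
      Properties conf = new Properties();
      conf.load(new FileInputStream("NSGAII.conf"));

      int populationSize = Integer.parseInt(conf.getProperty("populationSize", "100"));
      int maxEvaluations = Integer.parseInt(conf.getProperty("maxEvaluations", "25000"));
      double crossoverProbability =
          Double.parseDouble(conf.getProperty("crossoverProbability", "0.9"));

      System.out.println(populationSize + " " + maxEvaluations + " " + crossoverProbability);
    }
  }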


Chapter 7

What about’s

This chapter contains answers to some questions which have not been dealt with before in the manual.

7.1 What about developing single-objective metaheuristics with jMetal?

As jMetal is intended for multi-objective (MO) optimization, it is clear that to solve single-objective (SO)
problems you could define problems having one objective function and use some of the available MO
metaheuristics. However, this might not be the best choice; there are many frameworks available for SO
optimization (e.g., EO, Open Beagle, JavaEva, etc.), so you might consider them before jMetal.
Anyway, developing an SO version of an MO metaheuristic is usually not difficult. We offer the following
evolutionary algorithms:
• Variants of genetic algorithms (GAs)
– gGA: generational genetic algorithm (GA).
– ssGA: steady-state GA.
– scGA: synchronous cellular GA (cGA).
– acGA: asynchronous cGA.
– pgGA: parallel (multi-threaded) GA.
• CMA-ES: covariance matrix adaptation evolution strategy.
• DE: differential evolution.
• PSO: particle swarm optimization.

7.2 What about optimized variables and solution types?


When we deal with problems having a small number of variables, the general scheme of creating solutions
is reasonably efficient. However, if we have a problem with hundreds or thousands of decision
variables, the scheme is inefficient in terms of storage and computing time. For example, if the number
of decision variables of the problem is 1000, each solution will contain 1000 Java objects of class
jmetal.encodings.variable.Real, one per Variable object, each storing its own lower and
upper bound values. This wastes memory, but it also implies that manipulating solutions (e.g., creating
and copying them) is computationally expensive.
To cope with this issue, we have defined what we have called "optimization types". The idea is
simple: instead of using solutions with an array of N Real objects, we will use solutions with an array
of N real values. In jMetal 3.0 we incorporated two optimization types based on this idea: ArrayReal
and ArrayInt.
Using optimization types brings some difficulties that have to be solved. Thus, we now have the choice of
using a set of N decision variables, or one decision variable composed of an array of N values, which
affects the way variables are initialized and used. We have solved these problems by using wrapper
objects, which are included in jmetal.util.wrapper; in particular, we will show next how to use the
XReal wrapper.
Let us start by showing the class implementing the Schaffer problem:
 1  // Schaffer.java
 2  ...
 3  package jmetal.problems;
 4  ...
 5  /**
 6   * Class representing problem Schaffer
 7   */
 8  public class Schaffer extends Problem {
 9
10    /**
11     * Constructor.
12     * Creates a default instance of problem Schaffer
13     * @param solutionType The solution type must be "Real" or "BinaryReal".
14     */
15    public Schaffer(String solutionType) throws ClassNotFoundException {
16      numberOfVariables_ = 1;
17      numberOfObjectives_ = 2;
18      numberOfConstraints_ = 0;
19      problemName_ = "Schaffer";
20
21      lowerLimit_ = new double[numberOfVariables_];
22      upperLimit_ = new double[numberOfVariables_];
23      lowerLimit_[0] = -100000;
24      upperLimit_[0] = 100000;
25
26      if (solutionType.compareTo("BinaryReal") == 0)
27        solutionType_ = new BinaryRealSolutionType(this);
28      else if (solutionType.compareTo("Real") == 0)
29        solutionType_ = new RealSolutionType(this);
30      else {
31        System.out.println("Error: solution type " + solutionType + " invalid");
32        System.exit(-1);
33      }
34    } // Schaffer
35
36
37    /**
38     * Evaluates a solution
39     * @param solution The solution to evaluate
40     * @throws JMException
41     */
42    public void evaluate(Solution solution) throws JMException {
43      Variable[] variable = solution.getDecisionVariables();
44
45      double[] f = new double[numberOfObjectives_];
46      f[0] = variable[0].getValue() * variable[0].getValue();
47
48      f[1] = (variable[0].getValue() - 2.0) *
49             (variable[0].getValue() - 2.0);
50
51      solution.setObjective(0, f[0]);
52      solution.setObjective(1, f[1]);
53    } // evaluate
54  } // Schaffer

The class constructor contains at the end a group of sentences indicating the allowed solution types that
can be used to solve the problem (BinaryRealSolutionType and RealSolutionType). The evaluate()
method directly accesses the variables to evaluate the solutions. Schaffer's problem is an example of a
problem that does not need to use optimized types, given that it has only one variable.
Let us consider now problems which can have many variables: some examples are the ZDT, DTLZ, and
WFG benchmark problems, and Kursawe's problem. We use the latter as an example. Its constructor
is included next:
 1  public Kursawe(String solutionType, Integer numberOfVariables) throws ClassNotFoundException {
 2    numberOfVariables_ = numberOfVariables.intValue();
 3    numberOfObjectives_ = 2;
 4    numberOfConstraints_ = 0;
 5    problemName_ = "Kursawe";
 6
 7    upperLimit_ = new double[numberOfVariables_];
 8    lowerLimit_ = new double[numberOfVariables_];
 9
10    for (int i = 0; i < numberOfVariables_; i++) {
11      lowerLimit_[i] = -5.0;
12      upperLimit_[i] = 5.0;
13    } // for
14
15    if (solutionType.compareTo("BinaryReal") == 0)
16      solutionType_ = new BinaryRealSolutionType(this);
17    else if (solutionType.compareTo("Real") == 0)
18      solutionType_ = new RealSolutionType(this);
19    else if (solutionType.compareTo("ArrayReal") == 0)
20      solutionType_ = new ArrayRealSolutionType(this);
21    else {
22      System.out.println("Error: solution type " + solutionType + " invalid");
23      System.exit(-1);
24    }
25  } // Kursawe

We can observe that, at the end of the constructor, we have added ArrayRealSolutionType as a
third choice of solution representation for the problem. The point now is that directly accessing the
decision variables of the problem is cumbersome, because we must distinguish which kind of solution
type we are using. The use of the XReal wrapper simplifies this task, as we can see in the evaluate()
method:
 1  public void evaluate(Solution solution) throws JMException {
 2    XReal vars = new XReal(solution);
 3
 4    double aux, xi, xj;           // auxiliary variables
 5    double[] fx = new double[2];  // function values
 6    double[] x = new double[numberOfVariables_];
 7    for (int i = 0; i < numberOfVariables_; i++)
 8      x[i] = vars.getValue(i);
 9
10    fx[0] = 0.0;
11    for (int var = 0; var < numberOfVariables_ - 1; var++) {
12      xi = x[var] * x[var];
13      xj = x[var + 1] * x[var + 1];
14      aux = (-0.2) * Math.sqrt(xi + xj);
15      fx[0] += (-10.0) * Math.exp(aux);
16    } // for
17
18    fx[1] = 0.0;
19
20    for (int var = 0; var < numberOfVariables_; var++) {
21      fx[1] += Math.pow(Math.abs(x[var]), 0.8) +
22               5.0 * Math.sin(Math.pow(x[var], 3.0));
23    } // for
24
25    solution.setObjective(0, fx[0]);
26    solution.setObjective(1, fx[1]);
27  } // evaluate

Now, the wrapper encapsulates the access to the solutions by means of the getValue(index) method.
We must note that using the XReal wrapper implies that all the operators working with real values must
use it too (e.g., the real crossover and mutation operators). Attention must also be paid when requesting
information about parameters of the problems, such as the number of variables. This information is typically
obtained by invoking getNumberOfVariables() on the problem to be solved, which in turn returns
the value of the state variable numberOfVariables_. However, while this works properly when using
RealSolutionType, that method returns a value of 1 when using ArrayRealSolutionType. Let us recall
that we are replacing N variables by one variable composed of an array of size N. To avoid this issue,
the method provided by the XReal wrapper to obtain the number of decision variables must be used instead.
To give an idea of the benefits of using the optimized type ArrayReal, we have executed
NSGA-II to solve the ZDT1 problem with 1000 and 5000 variables (the default value is 30). The target
computer is a MacBook with a 2GHz Intel Core 2 Duo and 4GB of 1067 MHz DDR3 RAM, running Snow
Leopard; the version of the JDK is 1.6.0_17. The computing times of the algorithm when using the
RealSolutionType and ArrayRealSolutionType solution types when solving the problem with 1000
variables are 12.5s and 11.4s, respectively; in the case of the problem with 5000 variables, the times are
90s and 69s, respectively.
On the other hand, if we configure ZDT1 with 10,000 variables, the program fails reporting an out-of-memory
error when using RealSolutionType, while it runs properly when using the optimized type. The
memory error can be easily fixed by including the proper flags when launching the Java virtual machine
(e.g., java -Xmx512M jmetal.experiments.Main NSGAII ZDT1), but this is an example illustrating that
the memory savings resulting from using an optimized type can be significant.
Chapter 8

Versions and Release Notes

This manual starts with jMetal 2.0, released in December 2008. Next, we detail the release notes, new
features, and changes to the manual in the current release, jMetal 4.5, and in the previous versions.

8.1 Version 4.5 (21st January 2014)


Release notes
As in previous releases, we include new algorithms, new problems, and bug fixing.

New features
• Support for configuring metaheuristics parameter settings from properties files (see Section 6.7).

• New algorithms: adaptive and random NSGA-II [24].

• New single-objective algorithms: CMA-ES1 and pgGA (a parallel generational single-objective GA).

• New quality indicator: R2.

• The FastHypervolume class, which implements the WFG algorithm for computing the Hypervol-
ume quality indicator (Section 6.6). This class includes an optimized algorithm for calculating the
Hypervolume for problems having two objectives.

• New versions of the SMPSOhv and SMS-EMOA (named FastSMSEMOA) metaheuristics, which take
advantage of the FastHypervolume class when solving bi-objective problems.

• Two new single-objective problems: Rosenbrock and Rastrigin.

• The AbYSS algorithm has been adapted to use the ArrayReal encoding. This implied modifying
the Distance class.

Bugs
• Fixed a bug in method Solution.indexBest().

• Fixed a bug in the Settings classes (thanks to Martin Dohr).


1 Contribution of Esteban López-Camacho


Additions and Changes to the Manual


• Two new sections (Section 6.6 and Section 6.7) have been included.

• Section 4.4.3 has been updated.

8.2 Version 4.4 (23rd July 2013)


Release notes
As in previous releases, we include new algorithms, new problems, and bug fixing.

New features
• New algorithm: dMOPSO [39].

• New algorithm: SMPSOhv, a variant of SMPSO based on using the hypervolume indicator [24].

• New algorithm: cMOEAD, a variant of MOEAD adopting a constraint handling scheme based
on [1].

• New problem: mQAP (multiobjective Quadratic Assignment Problem).

• New statistical test: Friedman (see Section 4).

• Added SolutionSet.printFeasibleFUN() and SolutionSet.printFeasibleVAR() to write the
feasible solutions to files when the solved problems have side constraints2.

• New Settings classes have been added (cMOEAD_Settings, dMOPSO_Settings, MOCHC_Settings,
NSGAIIPermutation_Settings, SMPSOhv_Settings).

Bugs
• Fixed a bug in class SolutionComparator.

• Fixed a bug in method Solution.getNumberOfBits()3 .

• Fixed a bug in class AbYSS.

Additions and Changes to the Manual


• The Overview chapter (Chapter 1) has been updated.

• A new section on setting up jMetal with IntelliJ IDEA has been added (Section 2.5).

• The Experimentation chapter (Chapter 4) has been updated.

8.3 Version 4.3 (3rd January 2013)


Release notes
In this release we include basic support for implementing parallel multi-objective metaheuristics by
allowing the parallel evaluation of solutions. Two parallel versions of NSGA-II and SMPSO, named
pNSGAII and pSMPSO, respectively, have been developed using this feature.
2 Contribution of Francisco Luna
3 Thanks to Rafael Olaechea

New features
• New package jmetal.util.parallel, including the IParallelEvaluator interface and the
MultithreadedEvaluator class (a usage sketch is given after this list).
• Two new algorithms: pNSGAII and pSMPSO, including the pNSGAII_main, pNSGAII_Settings,
pSMPSO_main, and pSMPSO_Settings classes.
• Two new problems: Bihn2 and FourBarTruss.
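As an orientation, the sketch below outlines how the evaluator interface is meant to be used to evaluate
a batch of solutions in parallel. The method names startEvaluator(), addSolutionForEvaluation(),
parallelEvaluation(), and stopEvaluator(), as well as the convention that 0 threads means "use all
available cores", are assumptions based on how pNSGAII appears to use the interface; the class name
ParallelEvaluationSketch is illustrative, and the distributed pNSGAII source remains the reference.

    import java.util.List;
    import jmetal.core.Problem;
    import jmetal.core.Solution;
    import jmetal.util.parallel.IParallelEvaluator;
    import jmetal.util.parallel.MultithreadedEvaluator;

    public class ParallelEvaluationSketch {
      // Creates batchSize new solutions and evaluates them concurrently.
      public static void evaluateBatch(Problem problem, int batchSize) throws Exception {
        IParallelEvaluator evaluator = new MultithreadedEvaluator(0); // 0: use all cores (assumed)
        evaluator.startEvaluator(problem);
        for (int i = 0; i < batchSize; i++) {
          evaluator.addSolutionForEvaluation(new Solution(problem));
        }
        List<Solution> evaluated = evaluator.parallelEvaluation(); // blocks until all are evaluated
        evaluator.stopEvaluator();
        System.out.println("Evaluated " + evaluated.size() + " solutions");
      }
    }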

Additions and Changes to the Manual


• A new chapter has been added (Chapter 5).

8.4 Version 4.2 (14th November 2012)


Release notes
In this release a new algorithm has been included, a new problem has been added, some methods have
been improved, a new random number generator is available, and a new utility is provided.

New features
• The MOTSP problem, a multi-objective version of the TSP.
• The NSGAII MOTSP main class, aimed to solve the MOTSP problem.
• A new random number generator based on the Mersenne Twister (see Subsection 6.3)4 .
• The Experiment class can obtain reference fronts when they are not available in advance5 . See
Section 4.6.
• A new utility, called jmetal.util.ExtractParetoFront, has been added to extract the
non-dominated solutions from a file containing both dominated and non-dominated solutions. See
Section 6.5.

Bugs
Some minor bugs have been fixed.

Additions and Changes to the Manual


• Section 6.3 has been added.
• Section 6.5 has been added.

Performance improvements
The following improvements affecting performance have been included:
• The fast non-dominated sorting algorithm has been optimized6 .
• The thread model in the Experiment class has been changed to allow a better use of multicore
CPUs7 .
4 Contribution of Jean Laurent Hippolyte
5 Contribution of Jorge Rodrı́guez
6 Contribution of Guillaume Jacquenot
7 Contribution of Jorge Rodrı́guez

8.5 Version 4.0 (10th November 2011)


Release notes
In this release the package structure has been modified, the Operator class has been redefined, some
methods have been improved, and some bugs have been fixed.

New features
• The former jmetal.base package has been renamed as jmetal.core. The reason is to try to keep
the same package structure as in the C# version of jMetal we are developing8 ; base is a reserved
keyword in C#.
• The package structure has been modified. Now, the former package jmetal.base.operator be-
comes jmetal.operators, and jmetal.base.variable and jmetal.base.solutionType are now
jmetal.encodings.variable and jmetal.encodings.solutionType.
• New encoding: ArrayRealAndBinarySolutionType allows a representation combining an array of
reals and a binary string.
• Two new operators: PolinomialBitFlipMutation (Package: jmetal.operators.mutation) and
SBXSinglePointCrossver (Package: jmetal.operators.crossover), intended to be applied to
ArrayRealAndBinarySolutionType solutions.
• The operators can now be configured when their constructor is invoked (see Subsection 3.1.2, which
includes the example of the SBX crossover in Listing 3.3; a short configuration sketch is also given
below).
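To make the constructor-time configuration concrete, the sketch below creates an SBX crossover and a
polynomial mutation through the operator factories. The parameter names ("probability" and
"distributionIndex") follow the usual NSGA-II main programs; Listing 3.3 in Subsection 3.1.2 remains
the authoritative example, the mutation rate shown assumes a 30-variable problem, and the class name
OperatorConfigurationSketch is illustrative only.

    import java.util.HashMap;
    import jmetal.core.Operator;
    import jmetal.operators.crossover.CrossoverFactory;
    import jmetal.operators.mutation.MutationFactory;

    public class OperatorConfigurationSketch {
      public static void main(String[] args) throws Exception {
        // SBX crossover configured when it is created.
        HashMap<String, Object> crossoverParameters = new HashMap<String, Object>();
        crossoverParameters.put("probability", 0.9);
        crossoverParameters.put("distributionIndex", 20.0);
        Operator crossover =
            CrossoverFactory.getCrossoverOperator("SBXCrossover", crossoverParameters);

        // Polynomial mutation configured in the same way (rate assumed for 30 variables).
        HashMap<String, Object> mutationParameters = new HashMap<String, Object>();
        mutationParameters.put("probability", 1.0 / 30);
        mutationParameters.put("distributionIndex", 20.0);
        Operator mutation =
            MutationFactory.getMutationOperator("PolynomialMutation", mutationParameters);

        System.out.println("Operators created: " + crossover + ", " + mutation);
      }
    }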

Removed features
The GUI is not available in this release.

Bugs
Many minor bugs have been fixed.

Additions and Changes to the Manual


• Section 6 has been updated.
• Section 3 has been updated.

Performance improvements
Many sorting methods used in the framework have been optimized, and the new approach to creating
operators is more efficient (previously, all the operator parameters were checked each time an operator
was invoked); together, these changes have led to significant performance improvements. To give an idea
of the benefits of using the new version in terms of computing time, we include two examples. The target
computer is a MacBook with a 2GHz Intel Core 2 Duo and 4GB of 1067 MHz DDR3 RAM, running
Mac OS X Lion 10.7.2 (11C74); the version of the JDK is 1.6.0_26.
In the first example, we run NSGA-II to solve the Kursawe problem using typical settings (population
size: 100, number of evaluations: 25000). The times obtained when running the algorithm with versions
3.1 and 4.0 of jMetal are about 2.6s and 1.8s, respectively. In the second example, we execute MOEAD
to solve the LZ09_F1 problem with standard settings (population size: 300, number of evaluations:
300000), getting times in the order of 14.7s and 3.3s, respectively. That is, the time reductions are
about 30% and 77% in the two examples, respectively.
8 http://jmetalnet.sourceforge.net

8.6 Version 3.1 (1st October 2010)


Release notes
This release includes new algorithms (single- and multi-objective) and fixes several bugs.

New features
• A new solution type: ArrayRealAndBinarySolutionType (Package: jmetal.base.solutionType).

• Two new operators: PolinomialBitFlipMutation (Package: jmetal.base.operator.mutation)
and SBXSinglePointCrossver (Package: jmetal.base.operator.crossover), intended to be
applied to ArrayRealAndBinarySolutionType solutions.

• The SMS-EMOA algorithm (contributed by Simon Wessing).

• A single-objective version of a PSO algorithm.

• MOEA/D-DRA, a version of MOEA/D presented in CEC09.

Bugs
Bugs in the following packages and classes have been fixed:

• Class jmetal.base.operator.crossover.PMXCrossover

Additions and Changes to the Manual


• Section 1 has been modified.

8.7 Version 3.0 (28th February 2010)


Release notes
This release contains changes in the architecture of the framework, affecting the way the solution
representations are encoded (by using solution types). Another significant contribution is the jmetal.gui
package, which includes two graphical tools.

New features
• A new approach to define solution representations (Section 3.1.1).

• Two new variable representations: ArrayInt and ArrayReal (classes: jmetal.base.variable.ArrayInt
and jmetal.base.variable.ArrayReal).

• Two wrapper classes, XReal and XInt, to encapsulate the access to the different included repre-
sentations of real and integer types, respectively.

• Two graphical tools: the Simple Execution Support GUI (jmetal.gui.SimpleExecutionSupportGUI)
and the Experiment Support GUI (jmetal.gui.ExperimentsSupportGUI).

• Single-objective versions of a number of genetic algorithms (steady-state, generational, synchronous
cellular, asynchronous cellular), differential evolution, and evolution strategies (elitist and non-elitist).

• A parallel version of MOEA/D.



Bugs
Bugs in the following packages and classes have been fixed:

• Class jmetal.metaheuristics.moead.MOEAD

Additions and Changes to the Manual


• Section 3 has been modified

• Added Chapter 4

• Added Chapter 6

• Added Chapter 7

• Chapter FAQ has been removed

8.8 Version 2.2 (28th May 2009)


Release notes
This release contains as main contributions two new algorithms (a random search and a steady-state
version of NSGA-II), an update of the experiments package, and several bug fixes.

New features
• A random search algorithm (package: jmetal.metaheuristics.randomSearch).

• A steady-state version of NSGA-II (class: jmetal.metaheuristics.nsgaII.ssNSGAII). To configure
and run the algorithm, the jmetal.metaheuristics.nsgaII.NSGAII_main class can be used,
indicating ssNSGAII instead of NSGAII in line 84; alternatively, a ssNSGAII_Settings class is
available in jmetal.experiments.settings (see the configuration sketch after this list). We used
this algorithm in [8].

• The experiments package allows generating LaTeX tables that include the application of the
Wilcoxon statistical test to the results of jMetal experiments. Additionally, it can be indicated
whether notched or non-notched boxplots should be generated (Chapter 4).

• A first approximation to the use of threads to run experiments in parallel has been included
(Section 4.8).
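A minimal sketch of the Settings-based route mentioned above is shown next. The pattern (a Settings
subclass constructed with a problem name, then configure() to obtain the algorithm, then execute())
reflects the usual jMetal usage and the exact constructor of ssNSGAII_Settings should be verified; the
problem name ZDT1, the output file names, and the class name SsNSGAIIFromSettings are illustrative.

    import jmetal.core.Algorithm;
    import jmetal.core.SolutionSet;
    import jmetal.experiments.Settings;
    import jmetal.experiments.settings.ssNSGAII_Settings;

    public class SsNSGAIIFromSettings {
      public static void main(String[] args) throws Exception {
        // Configure the steady-state NSGA-II from its Settings class and run it
        // (constructor argument and configure() assumed from the usual pattern).
        Settings settings = new ssNSGAII_Settings("ZDT1");
        Algorithm algorithm = settings.configure();
        SolutionSet population = algorithm.execute();

        // Write the resulting objective values and decision variables to files.
        population.printObjectivesToFile("FUN");
        population.printVariablesToFile("VAR");
      }
    }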

Bugs
Bugs in the following packages have been fixed:

• jmetal.problems.ConstrEx.

• jmetal.metaheuristics.paes.Paes main.

• jmetal.metaheuristics.moead.MOEAD.

Additions and Changes to the Manual


• Chapter 4

8.9 Version 2.1 (23rd February 2009)


Release notes
This release contains as main contribution the support for automatically generating R9 scripts which,
when executed, produce figures representing boxplots of the results.

New features
• Class jmetal.experiments.Experiment: method generateRScripts().
• The IBEA algorithm [46] (package: jmetal.metaheuristics.ibea). This algorithm is included for
testing purposes; we have not validated the implementation yet.

Bugs
• A bug in the jmetal.base.operator.crossover.SinglePointCrossover class has been fixed.

8.10 Version 2.0 (23rd December 2008)


Release notes
This release contains as main contribution the package jmetal.experiments, which contains a set
of classes intended to facilitate carrying out performance studies with the algorithms included in the
framework. As a consequence, a new class jmetal.experiments.Settings has been defined to allow
setting the parameters of the metaheuristics in a separate class, so that the configurations of the
algorithms can be reused easily. In versions prior to jMetal 2.0, the settings were specified in a main
Java program associated with each technique, which made reusing the algorithm configurations difficult.

New features
• Package jmetal.experiments.

• The Additive Epsilon indicator (package: jmetal.qualityIndicators).


• CEC2008 benchmark (package: jmetal.problems).

Known Bugs
Additions and Changes to the Manual

9 http://www.r-project.org/
Bibliography

[1] M. Asafuddoula, T. Ray, R. Sarker, and K. Alam. An adaptive constraint handling approach
embedded moea/d. In Evolutionary Computation (CEC), 2012 IEEE Congress on, pages 1–8,
2012.

[2] S. Bleuler, M. Laumanns, L. Thiele, and E. Zitzler. PISA — a platform and programming language
independent interface for search algorithms. In C. M. Fonseca, P. J. Fleming, E. Zitzler, K. Deb,
and L. Thiele, editors, Evolutionary Multi-Criterion Optimization (EMO 2003), Lecture Notes in
Computer Science, pages 494 – 508, Berlin, 2003. Springer.

[3] D.W. Corne, N.R. Jerram, J.D. Knowles, and M.J. Oates. PESA-II: Region-based selection in
evolutionary multiobjective optimization. In Genetic and Evolutionary Computation Conference
(GECCO-2001), pages 283–290. Morgan Kaufmann, 2001.

[4] K. Deb. Multi-objective optimization using evolutionary algorithms. John Wiley & Sons, 2001.

[5] K. Deb, L. Thiele, M. Laumanns, and E. Zitzler. Scalable test problems for evolutionary multiob-
jective optimization. In Ajith Abraham, Lakhmi Jain, and Robert Goldberg, editors, Evolutionary
Multiobjective Optimization. Theoretical Advances and Applications, pages 105–145. Springer, USA,
2005.

[6] Kalyanmoy Deb, Amrit Pratap, Sameer Agarwal, and T. Meyarivan. A fast and elitist multiobjective
genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6(2):182–197, 2002.

[7] J.J. Durillo, A.J. Nebro, and E. Alba. The jMetal framework for multi-objective optimization:
Design and architecture. In CEC 2010, pages 4138–4325, Barcelona, Spain, July 2010.

[8] J.J. Durillo, A.J. Nebro, F. Luna, and E. Alba. On the effect of the steady-state selection scheme
in multi-objective genetic algorithms. In 5th International Conference, EMO 2009, volume 5467 of
Lecture Notes in Computer Science, pages 183–197, Nantes, France, April 2009. Springer Berlin /
Heidelberg.

[9] J.J. Durillo, A.J. Nebro, F. Luna, B. Dorronsoro, and E. Alba. jMetal: a Java framework for de-
veloping multi-objective optimization metaheuristics. Technical Report ITI-2006-10, Departamento
de Lenguajes y Ciencias de la Computación, University of Málaga, E.T.S.I. Informática, Campus
de Teatinos, 2006.

[10] Juan J. Durillo and Antonio J. Nebro. jmetal: A java framework for multi-objective optimization.
Advances in Engineering Software, 42(10):760 – 771, 2011.

[11] Juan J. Durillo, Antonio J. Nebro, Francisco Luna, and Enrique Alba. Solving three-objective
optimization problems using a new hybrid cellular genetic algorithm. In G. Rudolph, T. Jensen,
S. Lucas, C. Poloni, and N. Beume, editors, Parallel Problem Solving from Nature - PPSN X, volume
5199 of Lecture Notes in Computer Science, pages 661–670. Springer, 2008.


[12] M. Emmerich, N. Beume, and B. Naujoks. An emo algorithm using the hypervolume measure
as selection criterion. In C.A. Coello, A. Hernández, and E. Zitzler, editors, Third International
Conference on Evolutionary MultiCriterion Optimization, EMO 2005, volume 3410 of LNCS, pages
62–76. Springer, 2005.
[13] H. Eskandari, C. D. Geiger, and G. B. Lamont. FastPGA: A dynamic population sizing approach for
solving expensive multiobjective optimization problems. In S. Obayashi, K. Deb, C. Poloni, T. Hi-
royasu, and T. Murata, editors, Evolutionary Multi-Criterion Optimization. 4th International Con-
ference, EMO 2007, volume 4403 of Lecture Notes in Computer Science, pages 141–155. Springer,
2007.
[14] C.M. Fonseca and P.J. Flemming. Multiobjective optimization and multiple constraint handling
with evolutionary algorithms - part ii: Application example. IEEE Transactions on System, Man,
and Cybernetics, 28:38–47, 1998.
[15] D. Greiner, J.M. Emperador, G. Winter, and B. Galván. Improving computational mechanics opti-
mum design using helper objectives: An application in frame bar structures. In S. Obayashi, K. Deb,
C. Poloni, T. Hiroyasu, and T. Murata, editors, Fourth International Conference on Evolutionary
MultiCriterion Optimization, EMO 2007, volume 4403 of Lecture Notes in Computer Science, pages
575–589, Berlin, Germany, 2006. Springer.
[16] S. Huband, P. Hingston, L. Barone, and L. While. A review of multiobjective test problems and
a scalable test problem toolkit. IEEE Transactions on Evolutionary Computation, 10(5):477–506,
October 2006.
[17] J. Knowles, L. Thiele, and E. Zitzler. A Tutorial on the Performance Assessment of Stochastic
Multiobjective Optimizers. Technical Report 214, Computer Engineering and Networks Laboratory
(TIK), ETH Zurich, 2006.
[18] J. D. Knowles and D. W. Corne. Approximating the nondominated front using the pareto archived
evolution strategy. Evolutionary Computation, 8(2):149–172, 2000.
[19] S. Kukkonen and J. Lampinen. GDE3: The third evolution step of generalized differential evolution.
In IEEE Congress on Evolutionary Computation (CEC’2005), pages 443 – 450, 2005.
[20] A. Kurpati, S. Azarm, and J. Wu. Constraint handling improvements for multi-objective genetic
algorithms. Structural and Multidisciplinary Optimization, 23(3):204–213, 2002.
[21] F. Kursawe. A variant of evolution strategies for vector optimization. In H.P. Schwefel and
R. Männer, editors, Parallel Problem Solving for Nature, pages 193–197, Berlin, Germany, 1990.
Springer-Verlag.
[22] H. Li and Q. Zhang. Multiobjective optimization problems with complicated pareto sets, moea/d
and nsga-ii. IEEE Transactions on Evolutionary Computation, 12(2):284–302, April 2009.
[23] A.J. Nebro, E. Alba, G. Molina, F. Chicano, F. Luna, and J.J. Durillo. Optimal antenna placement
using a new multi-objective chc algorithm. In GECCO ’07: Proceedings of the 9th annual conference
on Genetic and evolutionary computation, pages 876–883, New York, NY, USA, 2007. ACM Press.
[24] A.J. Nebro, J.J. Durillo, and C.A. Coello Coello. Analysis of leader selection strategies in a multi-
objective particle swarm optimizer. In Evolutionary Computation (CEC), 2013 IEEE Congress on,
pages 3153–3160, 2013.
[25] A.J. Nebro, J.J. Durillo, J. Garcı́a-Nieto, C.A. Coello Coello, F. Luna, and E. Alba. Smpso:
A new pso-based metaheuristic for multi-objective optimization. In 2009 IEEE Symposium on
Computational Intelligence in Multicriteria Decision-Making (MCDM 2009), pages 66–73. IEEE
Press, 2009.

[26] A.J. Nebro, J.J. Durillo, F. Luna, B. Dorronsoro, and E. Alba. Design issues in a multiobjective
cellular genetic algorithm. In S. Obayashi, K. Deb, C. Poloni, T. Hiroyasu, and T. Murata, editors,
Evolutionary Multi-Criterion Optimization. 4th International Conference, EMO 2007, volume 4403
of Lecture Notes in Computer Science, pages 126–140. Springer, 2007.

[27] AJ Nebro, JJ Durillo, F Luna, B Dorronsoro, and E Alba. Mocell: A cellular genetic algorithm for
multiobjective optimization. Int. J. Intell. Syst., 24(7):726–746, 2009.

[28] A.J. Nebro, J.J. Durillo, C.A. Coello Coello, M. Machín, and B. Dorronsoro. A study of the combi-
nation of variation operators in the nsga-ii algorithm. In 15th Conference of the Spanish Association
for Artificial Intelligence, CAEPIA 2013, Proceedings of, pages 269–278, 2013.

[29] Antonio J. Nebro, Juan J. Durillo, C.A. Coello Coello, Francisco Luna, and Enrique Alba. Design
issues in a study of convergence speed in multi-objective metaheuristics. In G. Rudolph, T. Jensen,
S. Lucas, C. Poloni, and N. Beume, editors, Parallel Problem Solving from Nature - PPSN X, volume
5199 of Lecture Notes in Computer Science, pages 763–772. Springer, 2008.

[30] Antonio J. Nebro, Francisco Luna, Enrique Alba, Bernabé Dorronsoro, Juan J. Durillo, and Andreas
Beham. AbYSS: Adapting Scatter Search to Multiobjective Optimization. IEEE Transactions on
Evolutionary Computation, 12(4), August 2008.

[31] A. Osyczka and S. Kundo. A new method to solve generalized multicriteria optimization problems
using a simple genetic algorithm. Structural Optimization, 10:94–99, 1995.

[32] T. Ray, K. Tai, and K.C. Seow. An Evolutionary Algorithm for Multiobjective Optimization.
Engineering Optimization, 33(3):399–424, 2001.

[33] M. Reyes and C.A. Coello Coello. Improving PSO-based multi-objective optimization using crowd-
ing, mutation and ε-dominance. In C.A. Coello, A. Hernández, and E. Zitzler, editors, Third In-
ternational Conference on Evolutionary MultiCriterion Optimization, EMO 2005, volume 3410 of
LNCS, pages 509–519. Springer, 2005.

[34] J.D. Schaffer. Multiple objective optimization with vector evaluated genetic algorithms. In J.J.
Grefensttete, editor, First International Conference on Genetic Algorithms, pages 93–100, Hillsdale,
NJ, 1987.

[35] N. Srinivas and K. Deb. Multiobjective function optimization using nondominated sorting genetic
algorithms. Evolutionary Computation, 2(3):221–248, 1995.

[36] M. Tanaka, H. Watanabe, Y. Furukawa, and T. Tanino. Ga-based decision support system for
multicriteria optimization. In Proceedings of the IEEE International Conference on Systems, Man,
and Cybernetics, volume 2, pages 1556–1561, 1995.

[37] D. A. Van Veldhuizen and G. B. Lamont. Multiobjective Evolutionary Algorithm Research: A


History and Analysis. Technical Report TR-98-03, Dept. Elec. Comput. Eng., Graduate School of
Eng., Air Force Inst. Technol., Wright-Patterson, AFB, OH, 1998.

[38] L. While, L. Bradstreet, and L. Barone. A fast way of calculating exact hypervolumes. Evolutionary
Computation, IEEE Transactions on, 16(1):86–95, 2012.

[39] S. Zapotecas and C A. Coello Coello. A multi-objective particle swarm optimizer based on decom-
position. In GECCO, pages 69–76, 2011.

[40] Q. Zhang, A. Zhou, S. Z. Zhao, P. N. Suganthan, W. Liu, and S. Tiwari. Multiobjective optimiza-
tion test instances for the cec 2009 special session and competition. Technical Report CES-487,
University of Essex and Nanyang Technological University, Essex, UK and Singapore, September
2008.

[41] A. Zhou, Y. Jin, Q. Zhang, B. Sendhoff, and E. Tsang. Combining model-based and genetics-based
offspring generation for multi-objective optimization using a convergence criterion. In 2006 IEEE
Congress on Evolutionary Computation, pages 3234–3241, 2006.
[42] E. Zitzler, K. Deb, and L. Thiele. Comparison of multiobjective evolutionary algorithms: Empirical
results. Evolutionary Computation, 8(2):173–195, Summer 2000.

[43] E. Zitzler, M. Laumanns, and L. Thiele. SPEA2: Improving the strength pareto evolutionary
algorithm. In K. Giannakoglou, D. Tsahalis, J. Periaux, P. Papailou, and T. Fogarty, editors,
EUROGEN 2001. Evolutionary Methods for Design, Optimization and Control with Applications to
Industrial Problems, pages 95–100, Athens, Greece, 2002.
[44] E. Zitzler and L. Thiele. Multiobjective evolutionary algorithms: a comparative case study and the
strength pareto approach. IEEE Transactions on Evolutionary Computation, 3(4):257–271, 1999.
[45] E. Zitzler, L. Thiele, M. Laumanns, C.M. Fonseca, and V.G. Da Fonseca. Performance assess-
ment of multiobjective optimizers: an analysis and review. IEEE Transactions on Evolutionary
Computation, 7:117–132, 2003.
[46] Eckart Zitzler and Simon Künzli. Indicator-based selection in multiobjective search. In Xin Yao
et al., editors, Parallel Problem Solving from Nature (PPSN VIII), pages 832–842, Berlin, Germany,
2004. Springer-Verlag.
