Ship Design
Earlier Research
Figure 2.3.1: C-Job Naval Architects Design Circle. Four different levels of accuracy, with eight different design aspects to consider in the optimization process.
Ship Design Applications Chapter 2. Earlier Research
Strength Typically, when cargo is loaded onto a ship, the hull bends slightly in the middle. This happens because, when a ship is fully loaded, the water exerts an upward pressure on the hull while gravity acting on the cargo pushes the ship downwards in the middle; this phenomenon is known as sagging ((1) in Figure 2.3.2). The inverse of sagging is called hogging ((2) in Figure 2.3.2) and mainly occurs when the ship is empty. How much the ship is allowed to sag and hog is again defined by the regulating authorities. The ship should be made strong enough that it does not exceed the maximum stress limit.
Figure 2.3.2: Ship hull is sagging in (1) and hogging in (2). Note that the
bending is exaggerated.
Weight & Cost For a shipyard, the weight of a ship goes hand in hand with its cost. This is the case because the required amount of steel that needs to be bought and put together gives a rough indication of how much the ship will cost to build. Therefore, the weight of a ship
Figure 2.3.3: Space reservation of a ship, where every color defines a different space.
Motions Motion deals with how the ship reacts to waves and wind. It should not roll, pitch, yaw, heave, sway, or surge (see Figure 2.3.4) too much after a single wave or a sequence of waves. Passengers, for example, get seasick when there is too much heave or pitch, and equipment can break when the ship keeps rolling back and forth after a sequence of waves.
–The biggest room in the world is room for
improvement.
Helmut Schmidt
3
Ship Design Optimization Problem
Design Stages Chapter 3. Ship Design Optimization Problem
Figure 3.1.1: Four ship design stages: concept, basic, functional, and detail design.
Problem Description Chapter 3. Ship Design Optimization Problem
Definition 1.
(Multi-Objective Problem [6]): A general multi-objective problem is defined as minimizing $F(\vec{x}) = (f_1(\vec{x}), \ldots, f_k(\vec{x}))$ subject to $g_i(\vec{x}) \leq 0$, $i \in \{1, \ldots, m\}$, and $\vec{x} \in \Omega$. Multi-objective problem solutions minimize the components of the vector $F(\vec{x})$ to find a set of Pareto optimal solutions, where $\vec{x} = (x_1, \ldots, x_n)$ is an $n$-dimensional decision variable vector from some search space $\Omega$. Note that $g_i(\vec{x}) \leq 0$ represents the constraints that must be fulfilled while minimizing $F(\vec{x})$, and that $\Omega$ contains all possible $\vec{x}$ that can be used in an evaluation of $F(\vec{x})$.
Definition 2.
(Pareto Optimality [6]): A feasible solution $\vec{x} \in \Omega$ is said to be Pareto optimal with respect to $\Omega$ if and only if there is no $\vec{x}_2 \in \Omega$ for which $\vec{v} = F(\vec{x}_2) = (f_1(\vec{x}_2), \ldots, f_k(\vec{x}_2))$ dominates $\vec{u} = F(\vec{x}) = (f_1(\vec{x}), \ldots, f_k(\vec{x}))$.
Definition 3.
(Pareto Dominance [6]) : A vector ~u = [u1 , . . . , uk ] is said to dominate
another vector ~v = [v1 , . . . , vk ] if and only if ~u is partially less than ~v , i.e.,
∀i ∈ {1, . . . , k}, ui ≤ vi ∧ ∃i ∈ {1, . . . , k} : ui < vi . The following symbol is
used to indicate Pareto Dominance: .
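Definition 3 can be checked directly in code. Below is a minimal Python sketch of Pareto dominance for a minimization problem (the function name is illustrative, not from the thesis):

```python
def dominates(u, v):
    """Return True if objective vector u Pareto-dominates v (minimization):
    u is no worse than v in every objective and strictly better in at least one."""
    assert len(u) == len(v)
    return (all(ui <= vi for ui, vi in zip(u, v))
            and any(ui < vi for ui, vi in zip(u, v)))
```

For example, `dominates([1, 2], [2, 2])` holds, while two identical vectors never dominate each other.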
Definition 4.
(Pareto Optimal Set [6]): For a given multi-objective problem $F(\vec{x})$, the Pareto optimal set $\mathcal{P}^*$ is defined as:

$\mathcal{P}^* = \{\vec{x} \in \Omega \mid \neg \exists\, \vec{x}' \in \Omega : F(\vec{x}') \prec F(\vec{x})\}$
$\vec{x} = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}$ (3.2.2)
All possible combinations between the predefined lower and upper bounds of $\vec{x}$ together form the search space, denoted by $\Omega$. Of course, due to the constraints, not every combination will lead to a feasible solution.
The dredger (Figure 3.2.1) has the following decision variables: $\Delta_{\text{breadth}}$, $\Delta_{\text{length}}$, foreship length, hopper length extension, hopper breadth, and hopper height. Here $\Delta$ denotes a change with respect to the original design.
Figure 3.2.1: Trailer Suction Hopper Dredger designed by C-Job Naval Architects,
with the design variables annotated.
The overall length and breadth of the hull can be transformed with the help of Free Form Deformation (FFD) [34]. For this transformation, a box is drawn around the hull. Any point on the box can be moved in all directions, and the parent surface inside this box is transformed accordingly. This FFD is applied to the original concept design by changing the $\Delta_{\text{breadth}}$ and $\Delta_{\text{length}}$ parameters.
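As an illustration of the FFD idea, here is a minimal one-dimensional sketch: points embedded in $[0, 1]$ are displaced by a Bernstein-polynomial blend of control-point offsets. The real method in [34] deforms a 3-D control lattice around the hull; all names and the example offsets here are illustrative only.

```python
from math import comb

def ffd_1d(points, control_offsets):
    """Deform 1-D points (embedded in [0, 1]) by an FFD control polygon:
    control point i of n+1 is displaced by control_offsets[i], and the
    displacements are blended into the points with Bernstein polynomials."""
    n = len(control_offsets) - 1
    deformed = []
    for t in points:
        disp = sum(comb(n, i) * (1 - t) ** (n - i) * t ** i * control_offsets[i]
                   for i in range(n + 1))
        deformed.append(t + disp)
    return deformed

# Displacing only the last control point stretches the far end of the hull
# while leaving the t = 0 end untouched.
stretched = ffd_1d([0.0, 0.5, 1.0], [0.0, 0.0, 0.2])
```

In this sketch the point at $t = 0$ stays fixed, the midpoint moves slightly, and the end point moves by the full offset, mimicking how a $\Delta_{\text{length}}$ change deforms the hull smoothly rather than abruptly.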
The part of the ship from the most forward bulkhead to the front is called the foreship. The location of this bulkhead can be changed by varying the foreship length decision variable.
The cargo space, where the dredged material is dumped, is called the hopper. Changes can be made to the height, the breadth, and the length of the hopper.
Figure 3.2.2: Parallel coordinate plot with infeasible solutions (red), feasible solutions (blue), and non-dominated solutions (green); every vertical axis represents a decision variable (ship breadth, ship length, foreship length, hopper width, etc.).
3.2.2 Constraints
The constraints can be expressed in terms of function inequalities, where one function inequality represents one of the $m$ constraints. All inequality constraints can then be expressed in the following way:

$g_i(\vec{x}) \leq 0, \quad i \in \{1, \ldots, m\}$
3.2.3 Objectives
The objectives of a ship design optimization problem are typically conflicting and non-commensurable. As a consequence, there is usually not one unique, perfect solution but a set of alternative, so-called non-dominated solutions. This non-dominated solution set contains good compromises between the objective functions $f_j(\vec{x})$, $j = 1, \ldots, k$. Together they form the vector of functions to be minimized:

$F(\vec{x}) = (f_1(\vec{x}), \ldots, f_k(\vec{x}))^T$
The non-dominated solutions together form the Pareto frontier, where Pareto optimality is defined by Definition 2 and Pareto dominance by Definition 3.
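The construction of this non-dominated set can be sketched in a few lines of Python. This is a naive $O(N^2)$ filter over objective vectors; the names are illustrative, not from the thesis:

```python
def dominates(u, v):
    # u dominates v (minimization): no worse everywhere, strictly better somewhere
    return (all(a <= b for a, b in zip(u, v))
            and any(a < b for a, b in zip(u, v)))

def pareto_frontier(objective_vectors):
    """Keep only the objective vectors not dominated by any other vector."""
    return [u for u in objective_vectors
            if not any(dominates(v, u) for v in objective_vectors if v != u)]

# (2, 3) is dominated by (2, 2); the other three points are mutually non-dominated.
front = pareto_frontier([(1, 3), (2, 2), (3, 1), (2, 3)])
```

Practical multi-objective libraries use faster sorting schemes, but the quadratic filter is the direct transcription of Definitions 2 and 3.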
–The inspiration you seek is already within you. Be
silent and listen.
Jalāl ad-Dīn Muhammad Rūmī
4
Proposed Solution
4.1 CEGO
The proposed algorithm consists of several crucial components: the initial sampling, the training of surrogate models for both objectives and constraints, defining the Pareto frontier, defining the S-Metric Selection criterion, optimizing the infill criterion subject to the constraints, the evaluation of the found local optima, and the adjustment of the allowed constraint violations. A general overview of the algorithm is given in the CEGO pseudocode in Algorithm 1. Additionally, a flowchart of the algorithm is shown in Figure 4.1.1. Lastly, the individual components of the algorithm are described in detail in the descriptions that follow.
CEGO Chapter 4. Proposed Solution
Objective Models The objective values are used to train the objective surrogate models. The objective surrogate models used are Kriging models [23] (often also called Gaussian Process Regression models). For every objective function a separate Kriging model is fitted. Kriging treats every unknown objective function $f$ as the combination of a centered Gaussian process $Z(x)$ of zero mean with an unknown constant trend $\mu$. The advantage of using Kriging is that, in addition to the predicted mean $y(x)$, the predicted uncertainty, called the Kriging variance $\sigma(x)$, is provided. The Kriging variance can be exploited in the optimization procedure (see the description Criterion for optimization).
In Figures 4.1.2, 4.1.3, 4.1.4, and 4.1.5, a Kriging training process is presented for the function $y = x^2$. As can be seen from the figures, the more points we use to train the Kriging model, the smaller the variance gets and the more precise the approximation of the Kriging model becomes.
Figure 4.1.2: Kriging with Gaussian smoothing function approximation of $y = x^2$. Training points used: $x = 0.5$ and $x = 1$.
Figure 4.1.3: Kriging with Gaussian smoothing function approximation of $y = x^2$. Training points used: $x = -1$, $x = 0.5$, and $x = 1$.
Figure 4.1.4: Kriging with Gaussian smoothing function approximation of $y = x^2$. Training points used: $x = -1$, $x = 0.25$, $x = 0.5$, and $x = 1$.
Figure 4.1.5: Kriging with Gaussian smoothing function approximation of $y = x^2$. Training points used: $x = -1$, $x = 0$, $x = 0.25$, $x = 0.5$, and $x = 1$.
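The training behaviour shown in the figures can be reproduced with a small zero-mean Gaussian-process sketch. This is a simplification of the constant-trend Kriging model described above (the trend $\mu$ is dropped, and the kernel length scale 0.5 is an assumption, not a value from the thesis):

```python
import numpy as np

def rbf(a, b, length_scale=0.5):
    """Squared-exponential (Gaussian) kernel matrix between 1-D sample arrays."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length_scale ** 2)

def kriging_predict(X_train, y_train, X_test, noise=1e-10):
    """Zero-mean GP regression: predicted mean and Kriging variance at X_test."""
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    K_inv = np.linalg.inv(K)
    K_s = rbf(X_test, X_train)
    mean = K_s @ K_inv @ y_train
    var = 1.0 - np.sum((K_s @ K_inv) * K_s, axis=1)  # prior variance k(x, x) = 1
    return mean, var

# Training points from Figure 4.1.5
X = np.array([-1.0, 0.0, 0.25, 0.5, 1.0])
y = X ** 2
mean, var = kriging_predict(X, y, np.array([-1.0, -0.5, 0.75]))
# The variance collapses to ~0 at the training point x = -1 and is larger
# between the training points, exactly as in the figures.
```

Adding more training points shrinks the variance everywhere, which is the behaviour the figures illustrate.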
In the first few iterations, the CRBF model might not fit the constraint function very well. Therefore, a violation of the constraints is allowed. The magnitude of the allowed violation decreases as more feasible solutions are found.
Now, let us assume that the objective function $y = x^2$ is subject to the constraint function $g = -x^3 - \frac{1}{4}$, where $g$ should be smaller than or equal to 0 for a solution to be feasible. The constraint now limits the range of values for $x$ that the Kriging model is able to choose. In Figures 4.1.6, 4.1.7, 4.1.8, and 4.1.9, the available choices that can be made by the Kriging model in each step are visualized.
Figure 4.1.6: CRBF function approximation of $y = -x^3 - \frac{1}{4}$. Training points used: $x = 0.5$ and $x = 1$.
Figure 4.1.7: CRBF function approximation of $y = -x^3 - \frac{1}{4}$. Training points used: $x = -1$, $x = 0.5$, and $x = 1$.
Figure 4.1.8: CRBF function approximation of $y = -x^3 - \frac{1}{4}$. Training points used: $x = -1$, $x = 0.25$, $x = 0.5$, and $x = 1$.
Figure 4.1.9: CRBF function approximation of $y = -x^3 - \frac{1}{4}$. Training points used: $x = -1$, $x = 0$, $x = 0.25$, $x = 0.5$, and $x = 1$.
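The constraint-surrogate idea can be sketched with a plain interpolating RBF model. A Gaussian kernel and the margin value are assumptions here; the thesis' CRBF formulation and kernel may differ:

```python
import numpy as np

def gauss(a, b, length_scale=0.5):
    # Gaussian radial basis kernel between 1-D sample arrays
    return np.exp(-((a[:, None] - b[None, :]) / length_scale) ** 2)

def fit_constraint_rbf(x_train, g_train):
    """Interpolating RBF surrogate of one constraint function g."""
    return np.linalg.solve(gauss(x_train, x_train), g_train)

def is_feasible(x_train, weights, x_test, eps=0.0):
    """Predicted feasibility with an allowed violation margin eps >= 0."""
    g_hat = gauss(x_test, x_train) @ weights
    return g_hat <= eps

# Constraint g(x) = -x**3 - 1/4 sampled at the points of Figure 4.1.9
x = np.array([-1.0, 0.0, 0.25, 0.5, 1.0])
w = fit_constraint_rbf(x, -x ** 3 - 0.25)
# With a small allowed violation the surrogate still rejects clearly
# infeasible points (e.g. x = -1) but accepts feasible ones (e.g. x = 0.5).
flags = is_feasible(x, w, np.array([0.5, -1.0]), eps=0.02)
```

The `eps` margin plays the role of the allowed constraint violation described above: a positive `eps` lets the optimizer sample slightly infeasible points while the surrogate is still inaccurate.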
Here $\max(\vec{\Lambda})$ is the maximum value per objective on the Pareto frontier, $\min(\vec{\Lambda})$ is the minimum value per objective on the Pareto frontier, $k$ is the number of objectives, maxEval the maximum number of allowed iterations, and eval the number of evaluations executed so far. The final (hyper)volume that $\hat{y}_{\text{pot}}$ adds to the Pareto frontier, multiplied by minus one, is the score the S-metric criterion returns. If $\hat{y}_{\text{pot}}$ is not
Find Local Optima The infill criterion is optimized in such a manner that it searches for new optimal solutions that do not violate any of the constraints. This can be done with an optimization algorithm that is capable of dealing with multiple decision variables ($\vec{x}$), a single-objective problem (the S-metric criterion), and multiple constraints (the CRBF functions). In this research, the Constrained Optimization by Linear Approximation (COBYLA) algorithm [31] is used to optimize the objectives subject to the constraints.
The problem that COBYLA solves is set up in the following way: the decision variables $\vec{x}$ can be changed within the predefined bounds; the objective is the S-metric criterion that predicts the additional (hyper)volume given $\vec{x}$; the constraints are interpolated with the CRBF models given $\vec{x}$; and finally the DRC constraint is added, which makes sure that the solutions that get evaluated are not too close to previously found solutions.
COBYLA can deal with such problems and is therefore able to optimize the infill criterion under the condition that the constraints are satisfied. The vector $\vec{x}$ that is expected to be feasible and expected to contribute the most to the Pareto frontier approximation is then proposed as the new solution. If no feasible solution can be found in the current design space, the vector $\vec{x}$ with the smallest expected constraint violation is chosen.
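The infill optimization step can be sketched with SciPy's COBYLA implementation. The quadratic objective and linear constraint below merely stand in for the S-metric criterion and the CRBF models; they are illustrative, not from the thesis:

```python
import numpy as np
from scipy.optimize import minimize

def score(x):
    # Stand-in surrogate score (in CEGO this would be minus the added hypervolume)
    return x[0] ** 2 + x[1] ** 2

# SciPy's "ineq" convention: fun(x) >= 0 means feasible,
# so x[0] + x[1] >= 1 stands in for one CRBF constraint model.
constraints = [{"type": "ineq", "fun": lambda x: x[0] + x[1] - 1.0}]

res = minimize(score, x0=np.array([2.0, 0.0]), method="COBYLA",
               constraints=constraints)
# COBYLA builds linear approximations of the objective and constraints
# and converges to the constrained optimum near (0.5, 0.5).
```

Because COBYLA is derivative-free, it only needs surrogate evaluations, which matches the setting above where objectives and constraints are cheap surrogate models rather than expensive simulations.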
Alternatively, the allowed violation increases by 100% when $\lfloor 2 \cdot \sqrt{n} \rfloor$ infeasible solutions are found. This increase can be explained by the idea that the allowed violation of the CRBF is probably too tight and that the CRBF models are not yet capable of modelling the constraint functions well enough. By increasing the allowed constraint violation margin of the CRBF models, the constrained area can be explored more thoroughly, so that the fit of the CRBF models can improve. Of course, it does not make sense to explore the area already known to heavily violate the constraints; therefore the maximum allowed violation is set to 0.02 in the experiments reported in this thesis.
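The adjustment schedule described above can be sketched as follows. The doubling trigger and the 0.02 cap come from the text; the shrink factor of 0.5 for feasible solutions is an assumption, since the thesis only states that the margin decreases as more feasible solutions are found:

```python
from math import floor, sqrt

def adjust_violation(eps, n_dim, feasible_found, infeasible_streak):
    """Sketch of the allowed-constraint-violation update: shrink the margin
    when feasible solutions appear (assumed factor 0.5), double it after
    floor(2 * sqrt(n)) infeasible solutions, and never exceed the 0.02 cap."""
    if feasible_found:
        eps *= 0.5                      # assumed shrink rate
    elif infeasible_streak >= floor(2 * sqrt(n_dim)):
        eps *= 2.0                      # "increases by 100%"
    return min(eps, 0.02)               # cap used in the experiments

# With n = 6 decision variables, floor(2 * sqrt(6)) = 4, so after 5 infeasible
# solutions the margin doubles (and is clipped at the 0.02 cap).
eps = adjust_violation(0.01, n_dim=6, feasible_found=False, infeasible_streak=5)
```

The cap keeps the optimizer from wandering into the heavily violated region while still allowing the CRBF fit to improve near the constraint boundary.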
–Failure is just practice for success.
Christopher Hitchens
5
Experimental Evaluation
Experimental Setup Chapter 5. Experimental Evaluation
point values are set in such a manner that they represent the maximum values of interest for the objective functions. In Figure 5.1.1, a reference point, a Pareto frontier, and the corresponding (hyper)volume are visualized for an example frontier. The algorithm with the highest HV between the fixed reference point and the Pareto frontier approximation can be defined as the algorithm that performed best on the problem.
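For two objectives, the HV between the reference point and the frontier reduces to a sum of rectangle areas, which can be sketched in a few lines (the sweep below assumes minimization; names are illustrative):

```python
def hypervolume_2d(front, ref):
    """Area dominated by a 2-D Pareto front w.r.t. a reference point
    (minimization). Points beyond the reference point are ignored."""
    pts = sorted(p for p in front if p[0] <= ref[0] and p[1] <= ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:                       # sweep in increasing f1
        hv += (ref[0] - f1) * max(0.0, prev_f2 - f2)
        prev_f2 = min(prev_f2, f2)
    return hv

# Three non-dominated points against reference (4, 4):
# rectangles of area 3, 2, and 1, so the hypervolume is 6.0.
hv = hypervolume_2d([(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)], ref=(4.0, 4.0))
```

For more than two objectives (as in the WP problem later in this chapter), dedicated hypervolume algorithms are needed, since the naive sweep does not generalize directly.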
Figure 5.1.1: Pareto frontier, reference point and (hyper)volume visualized for the CEXP function.
The HV already gives a good indication of which algorithm gives the best result, but the spread of the obtained solutions on the Pareto frontier approximation is also of interest. To examine the spread of the obtained solutions, the Pareto frontier is visualized.
In addition to the HV and the spread, the convergence rate of the different algorithms can also be an interesting measure to look at. This is the case because, with expensive evaluation functions, the number of allowed function evaluations is typically limited. This means that a fast-converging algorithm is more interesting than a slow-converging one.
Algorithm Comparison Chapter 5. Experimental Evaluation
The reference point is set to $[5\,000, 2]$. This is because we are only interested in design variations with a resistance coefficient smaller than 2 and a steel weight smaller than 5 000 tonnes. Furthermore, based on 200 random samples, approximately 24% of the design space is feasible. The original dredger designed by human experts has an approximated steel weight of 2 039 tonnes and an estimated resistance coefficient of 1.08.
5.2.1 Hypervolume
As shown in Table 5.2.1, CEGO outperforms NSGA-II, SPEA2 and MOGA
in terms of the HV measure for the final Pareto frontier with the limited
5.2.2 Spread
In Figures 5.2.1, 5.2.2, 5.2.3, 5.2.4, 5.2.5, and 5.2.6, the non-dominated solutions for six typical test functions are visualized. From these figures it can clearly be seen that both the approximation of the Pareto frontier and the spread obtained by the CEGO algorithm are better compared to the other algorithms. Note that WP has five objectives and that five dimensions are hard to visualize; therefore the Pareto frontier approximation obtained by the algorithms is visualized as a parallel coordinate plot with the five function values on the y-axes. Since WP is a minimization problem, we are interested in values as low as possible on every axis.
Figure 5.2.1: Pareto frontier obtained by the four algorithms on the CEXP problem.
Figure 5.2.2: Pareto frontier obtained by the four algorithms on the OSY problem.
Figure 5.2.3: Pareto frontier obtained by the four algorithms on the SRD problem.
Figure 5.2.4: Pareto frontier obtained by the four algorithms on the WB problem.
Figure 5.2.5: Pareto frontier obtained by the four algorithms on the SPD problem.
Figure 5.2.6: Pareto frontier obtained by the four algorithms on the WP problem.
Figure 5.2.7: Convergence plot for the four algorithms on the CEXP problem.
Figure 5.2.8: Convergence plot for the four algorithms on the OSY problem.
Dredger Optimization Results Chapter 5. Experimental Evaluation
Figure 5.2.9: Convergence plot for the four algorithms on the SRD problem.
Figure 5.2.10: Convergence plot for the four algorithms on the WB problem.
Figure 5.2.11: Convergence plot for the four algorithms on the SPD problem.
Figure 5.2.12: Convergence plot for the four algorithms on the WP problem.
Figure 5.3.1: Original design and Pareto frontier obtained by CEGO, MOGA, NSGA-II and SPEA2 on the dredger optimization problem.
Figure 5.3.2: Convergence plot for CEGO, MOGA, NSGA-II and SPEA2 on the dredger optimization problem.
Figure 5.3.3: Original dredger design by human experts.
Figure 5.3.4: Dredger design optimized with CEGO.
As can be seen from the figures, the dredger found by CEGO is longer compared to the original design. Typically, when a ship is longer, more steel is needed to fulfill the strength requirements. In this case, the design variation found by CEGO actually has less steel weight. This is caused by a longer hopper, which contributes to the overall strength of the ship. The extra strength results in less steel being required in the plates and profiles to meet the regulations concerning sagging and hogging.
Because the ship is longer and lighter, it has less draught and more floating capacity compared to the original. Due to the smaller draught and the longer design, the ship generates smaller waves with a different wave pattern, which results in less resistance.
–The key is not to prioritize what’s on your schedule,
but to schedule your priorities.
Stephen Covey
6
Conclusion and Future Work
Future Work Chapter 6. Conclusion and Future Work
Bibliography
[2] Nicola Beume, Boris Naujoks, and Michael Emmerich. SMS-EMOA: Multiobjective selection based on dominated hypervolume. European Journal of Operational Research, 181(3):1653–1669, 2007.
[4] Piya Chootinan and Anthony Chen. Constraint handling in genetic algorithms using a gradient-based repair method. Computers & Operations Research, 33(8):2263–2281, 2006.
[6] Carlos A Coello Coello, Gary B Lamont, David A Van Veldhuizen, et al.
Evolutionary algorithms for solving multi-objective problems, volume 5.
Springer, 2007.
[11] Kalyanmoy Deb, Samir Agrawal, Amrit Pratap, and Tanaka Meyarivan.
A fast elitist non-dominated sorting genetic algorithm for multi-objective
optimization: NSGA-II. In International Conference on Parallel Prob-
lem Solving From Nature, pages 849–858. Springer, 2000.
[16] David E Goldberg and John H Holland. Genetic algorithms and machine
learning. Machine learning, 3(2):5, 1988.
[31] Michael JD Powell. A direct search optimization method that models the
objective and constraint functions by linear interpolation. In Advances
in optimization and numerical analysis, pages 51–67. Springer, 1994.
[37] Yusuke Tahara, Satoshi Tohyama, and Tokihiro Katsui. CFD-based multi-objective optimization method for ship design. International Journal for Numerical Methods in Fluids, 52(5):499–527, 2006.
[39] Guolei Tang, Wenyuan Wang, Zijian Guo, Xuhui Yu, and Bingchang
Wang. Simulation-based optimization for generating the dimensions of
a dredged coastal entrance channel. Simulation, 90(9):1059–1070, 2014.
[40] Aimin Zhou, Bo-Yang Qu, Hui Li, Shi-Zheng Zhao, Ponnuthurai Na-
garatnam Suganthan, and Qingfu Zhang. Multiobjective evolutionary
algorithms: A survey of the state of the art. Swarm and Evolutionary
Computation, 1(1):32–49, 2011.