LS-OPT Manual
June, 2003
Version 2
Copyright © 1999-2003
LIVERMORE SOFTWARE
TECHNOLOGY CORPORATION
All Rights Reserved
Mailing address:
Livermore Software Technology Corporation
2876 Waverley Way
Livermore, California 94550-1740
Support Address:
Livermore Software Technology Corporation
7374 Las Positas Road
Livermore, California 94550
FAX: 925-449-2507
TEL: 925-449-2500
EMAIL: sales@lstc.com
Preface to Version 1
Much of the later development at LSTC was influenced by industrial partners, particularly in the automotive
industry. Thanks are due to these partners for their cooperation and also for providing access to high-end
computing hardware.
At LSTC, the author wishes to give special thanks to colleague and co-developer Dr. Trent Eggleston.
Thanks are due to Mr. Mike Burger for setting up the examples.
Nielen Stander
Livermore, CA
August, 1999
Preface to Version 2
Version 2 of LS-OPT evolved from Version 1 and differs in many significant regards. These can be
summarized as follows:
As in the past, these developments have been influenced by industrial partners, particularly in the
automotive industry. Several developments were also contributed by Nely Fedorova and Serge Terekhoff of
SFTI. Invaluable research contributions have been made by Professor Larsgunnar Nilsson and his group in
the Mechanical Engineering Department at Linköping University, Sweden and by Professor Ken Craig’s
group in the Department of Mechanical Engineering at the University of Pretoria, South Africa. The authors
also wish to give special thanks to Mike Burger at LSTC for setting up further examples for Version 2.
This LS-OPT manual consists of three parts. In the first part, the Theoretical Manual (Chapter 2), the
theoretical background is given for the various features in LS-OPT. The next part is the User’s Manual
(Chapters 4 through 16), which guides the user in the use of LS-OPTui, the graphical user interface. These
chapters also describe the command language syntax. The final part of the manual is the Examples section
(Chapter 17), where eight examples illustrate the application of LS-OPT to a variety of practical
applications. Appendices contain interface features (Appendix A, Appendix B and Appendix C), database
file descriptions (Appendix D), a mathematical expression library (Appendix E), advanced theory
(Appendix F), a Glossary (Appendix G) and a Quick Reference Manual (Appendix H).
Features that are only available in Version 2.1 are indicated. The most important of these are (i) extraction
from the binout database, (ii) stochastic search, (iii) probabilistic modeling and (iv) Kriging.
THEORETICAL MANUAL
2. Optimization Methodology
2.1 Introduction
In the conventional design approach, a design is improved by evaluating its response and making design
changes based on experience or intuition. This approach does not always lead to the desired result, that of a
‘best’ design, since design objectives are sometimes in conflict, and it is not always clear how to change the
design to achieve the best compromise of these objectives. A more systematic approach can be obtained by
using an inverse process of first specifying the criteria and then computing the ‘best’ design. The procedure
by which design criteria are incorporated as objectives and constraints into an optimization problem that is
then solved is referred to as optimal design.
The state of computational methods and computer hardware has only recently advanced to the level where
complex nonlinear problems can be analyzed routinely. Many examples can be found in the simulation of
impact problems and manufacturing processes. The responses resulting from these time-dependent
processes are, as a result of behavioral instability, often highly sensitive to design changes. Program logic,
as for instance encountered in parallel programming or adaptivity, may cause spurious sensitivity. Roundoff
error may further aggravate these effects, which, if not properly addressed in an optimization method, could
obstruct the improvement of the design by way of corrupting the function gradients.
Among several methodologies available to address optimization in this design environment, response
surface methodology (RSM), a statistical method for constructing smooth approximations to functions in a
multi-dimensional space, has achieved prominence in recent years. Rather than relying on local information
such as a gradient only, RSM selects designs that are optimally distributed throughout the design space to
construct approximate surfaces or ‘design formulae’. Thus, the local effect caused by ‘noise’ is alleviated
and the method attempts to find a representation of the design response within a bounded design space or
smaller region of interest. This extraction of global information allows the designer to explore the design
space, using alternative design formulations. For instance, in vehicle design, the designer may decide to
investigate the effect of varying a mass constraint, while monitoring the crashworthiness responses of a
vehicle. The designer might also decide to constrain the crashworthiness response while minimizing or
maximizing any other criteria such as mass, ride comfort criteria, etc. These criteria can be weighted
differently according to importance and therefore the design space needs to be explored more widely.
Part of the challenge of developing a design program is that designers are not always able to clearly define
their design problem. In some cases, design criteria may be regulated by safety or other considerations and
therefore a response has to be constrained to a specific value. These can be easily defined as mathematical
constraint equations. In other cases, fixed criteria are not available but the designer knows whether the
responses must be minimized or maximized. In vehicle design, for instance, crashworthiness can be
constrained because of regulation, while other parameters such as mass, cost and ride comfort can be treated
as objectives to be weighted according to importance. In these cases, the designer may have target values in
mind for the various response and/or design parameters, so that the objective has to be formulated to approximate the target values as closely as possible. Because the relative importance of
various criteria can be subjective, the ability to visualize the trade-off properties of one response vs. another
becomes important.
Trade-off curves are visual tools used to depict compromise properties where several important response
parameters are involved in the same design. They play an extremely important role in modern design where
design adjustments must be made accurately and rapidly. Design trade-off curves are constructed using the
principle of Pareto optimality. This implies that only those designs for which an improvement in one response necessarily results in the deterioration of another response are represented. In this sense no
further improvement of a Pareto optimal design can be made: it is the best compromise. The designer still
has a choice of designs but the factor remaining is the subjective choice of which feature or criterion is more
important than another. Although this choice must ultimately be made by the designer, these curves can be
helpful in making such a decision. An example in vehicle design is the trade-off between mass (or energy
efficiency) and safety.
Adding to the complexity is the fact that mechanical design is really an interdisciplinary process involving
a variety of modeling and analysis tools. To facilitate this process, and allow the designer to focus on
creativity and refinement, it is important to provide suitable interfacing utilities to integrate these design
tools. Designs are bound to become more complex due to the legislation of safety and energy efficiency as
well as commercial competition. It is therefore likely that in the future an increasing number of disciplines will have to be integrated into a particular design. This approach of multidisciplinary design requires the designer
to run more than one case, often using more than one type of solver. For example, the design of a vehicle
may require the consideration of crashworthiness, ride comfort, noise level as well as durability. Moreover,
the crashworthiness analysis may require more than one analysis case, e.g. frontal and side impact. It is
therefore likely that as computers become more powerful, the integration of design tools will become more
commonplace, requiring a multidisciplinary design interface.
Modern architectures often feature multiple processors and all indications are that the demand for
distributed computing will strengthen into the future. This is causing a revolution in computing as single
analyses that took a number of days in the recent past can now be done within a few hours. Optimization,
and RSM in particular, lend themselves very well to being applied in distributed computing environments
because of the low level of message passing. Response surface methodology is efficiently handled, since
each design can be analyzed independently during a particular iteration. Needless to say, sequential methods
have a smaller advantage in distributed computing environments than global search methods such as RSM.
The present version of LS-OPT also features Monte Carlo based point selection schemes and optimization
methods. The respective relevance of stochastic and response surface based methods may be of interest. In a
pure response surface based method, the effect of the variables is distinguished from chance events while
Monte Carlo simulation is used to investigate the effect of these chance events. The two methods should be
used in a complementary fashion rather than substituting the one for the other. In the case of events in which
chance plays a significant role, responses of design interest are often of a global nature (being averaged or
integrated over time). These responses are mainly deterministic in character. The full vehicle crash example
in this manual can attest to the deterministic qualities of intrusion and acceleration pulses. These types of
responses may be highly nonlinear and have random components due to uncontrollable noise variables, but
they are not random.
Stochastic methods have also been touted as design improvement methods. In a typical approach, the user
iteratively selects the best design results of successive stochastic simulations to improve the design. These
design methods, being dependent on chance, are generally not as efficient as response surface methods.
However, an iterative design improvement method based on stochastic simulation is available in LS-OPT.
Stochastic methods have an important purpose when conducted directly or on the surrogate (approximated)
design response in reliability based design optimization and robustness improvement. This methodology is
currently under development and will be available in future versions of LS-OPT.
Optimization can be defined as a procedure for “achieving the best outcome of a given operation while
satisfying certain restrictions” [19]. This objective has always been central to the design process, but is now
assuming greater significance than ever because of the maturity of mathematical and computational tools
available for design.
Mathematical and engineering optimization literature usually presents the above phrase in a standard form
as
$$\min f(x) \qquad (2.1)$$

subject to

$$g_j(x) \leq 0; \quad j = 1, 2, \ldots, m$$

and

$$h_k(x) = 0; \quad k = 1, 2, \ldots, l$$
where $f$, $g$ and $h$ are functions of the independent variables $x_1, x_2, x_3, \ldots, x_n$. The function $f$, referred to as the
cost or objective function, identifies the quantity to be minimized or maximized. The functions g and h are
constraint functions which represent the design restrictions. The variables collectively described by the
vector x are often referred to as design variables or design parameters.
The two sets of functions gj and hk define the constraints of the problem. The equality constraints do not
appear in any further formulations presented here because algorithmically each equality constraint can be
represented by two inequality constraints in which the upper and lower bounds are set to the same number,
e.g.
$$h_k(x) = 0 \;\sim\; 0 \leq h_k(x) \leq 0 \qquad (2.2)$$
Equations (2.1) then become
$$\min f(x) \qquad (2.3)$$

subject to

$$g_j(x) \leq 0; \quad j = 1, 2, \ldots, m$$
The necessary conditions for the solution $x^*$ to Eq. (2.3) are the Karush-Kuhn-Tucker optimality conditions:

$$\nabla f(x^*) + \lambda^T \nabla g(x^*) = 0 \qquad (2.4)$$

$$\lambda^T g(x^*) = 0$$

$$g(x^*) \leq 0$$

$$\lambda \geq 0.$$
These conditions are derived by differentiating the Lagrangian function of the constrained minimization
problem
$$L(x) = f(x) + \lambda^T g(x) \qquad (2.5)$$

and applying the conditions

$$\nabla^T f \, \partial x^* \geq 0 \quad \text{(optimality)} \qquad (2.6)$$

and

$$\nabla^T g \, \partial x^* \leq 0 \quad \text{(feasibility)} \qquad (2.7)$$

to a perturbation $\partial x^*$.
The $\lambda_j$ are the Lagrange multipliers, which may be nonzero only if the corresponding constraint is active, i.e. $g_j(x^*) = 0$.
These conditions are not used explicitly in LS-OPT and are not tested for at optima. They are more of
theoretical interest in this manual, although the user should be aware that some optimization algorithms are
based on these conditions.
in order to construct the local approximations. These gradients can be computed either analytically or
numerically. In order for gradient-based algorithms such as SQP to converge, the functions must be
continuous with continuous first derivatives.
Analytical differentiation requires the formulation and implementation of derivatives with respect to the
design variables in the simulation code. Because of the complexity of this task, analytical gradients (also
known as design sensitivities) are mostly not readily available.
Numerical differentiation is typically based on forward difference methods that require the evaluation of n
perturbed designs in addition to the current design. This is simple to implement but is expensive and
hazardous because of the presence of round-off error. As a result, it is difficult to choose the size of the
intervals of the design variables, without risking spurious derivatives (the interval is too small) or
inaccuracy (the interval is too large). Some discussion on the topic is presented in Reference [19].
As a result, gradient-based methods are typically only used where the simulations provide smooth
responses, such as linear structural analysis and certain types of nonlinear analysis.
In non-linear dynamic analysis such as the analysis of impact or metal-forming, the derivatives of the
response functions are mostly severely discontinuous. This is mainly due to the presence of friction and
contact. The response (and therefore the sensitivities) may also be highly nonlinear due to the chaotic nature
of impact phenomena and therefore the gradients may not reveal much of the overall behavior. Furthermore,
the accuracy of numerical sensitivity analysis may also be adversely affected by round-off error. Analytical
sensitivity analysis for friction and contact problems is a subject of current research.
It is mainly for the above reasons that researchers have resorted to global approximation methods for
smoothing the design response. The art and science of developing design approximations has been a popular
theme in design optimization research for decades (for a review of the various approaches, see e.g.
Reference [5] by Barthelemy). Barthelemy categorizes two main global approximation methods, namely
response surface methodology [11] and neural networks [23].
In the present implementation, the gradient vectors of general composites based on mathematical
expressions of the basic response surfaces are computed using numerical differentiation. A default interval
of 1/1000 of the size of the design space is used in the forward difference method.
$$L_j \leq g_j(x) \leq U_j; \quad j = 1, 2, \ldots, m \qquad (2.8)$$

$$\frac{L_j}{g_j(x_0)} \leq \frac{g_j(x)}{g_j(x_0)} \leq \frac{U_j}{g_j(x_0)}; \quad j = 1, 2, \ldots, m \qquad (2.9)$$
The design variables have been normalized internally by scaling the design space [xL ; xU] to [0;1], where xL
is the lower and xU the upper bound. The formula
$$\xi_i = \frac{x_i - x_i^L}{x_i^U - x_i^L} \qquad (2.10)$$

is used for this purpose.
When using LS-OPT to minimize maximum violations, the responses must be normalized by the user. This
method is chosen to give the user the freedom in selecting the importance of different responses when e.g.
performing parameter identification. Section 2.15.3 will present this application in more detail.
Although inherently simple, the application of response surface methods to mechanical design has been
inhibited by the high cost of simulation and the large number of analyses required for many design
variables. In the quest for accuracy, increased hardware capacity has been consumed by greater modeling
detail and therefore optimization methods have remained largely on the periphery of the area of mechanical
design. In lieu of formal methods, designers have traditionally resorted to experience and intuition to
improve designs. This is seldom effective and also manually intensive. Moreover, design objectives are
often in conflict, making conventional methods difficult to apply, and therefore more analysts are
formalizing their design approach by using optimization.
Response Surface Methodology (or RSM) requires the analysis of a predetermined set of designs. A design
surface is fitted to the response values using regression analysis. Least squares approximations are
commonly used for this purpose. The response surfaces are then used to construct an approximate design
“subproblem” which can be optimized.
The response surface method relies on the fact that the set of designs on which it is based is well chosen.
Randomly chosen designs may cause an inaccurate surface to be constructed, or even prevent a surface from being constructed at all. Because simulations are often time-consuming and may take days to run, the
overall efficiency of the design process relies heavily on the appropriate selection of a design set on which
to base the approximations. For the purpose of determining the individual designs, the theory of
experimental design (Design of Experiments or DOE) is required. Several experimental design criteria are
available but one of the most popular for an arbitrarily shaped design space is the D-optimality criterion.
This criterion has the flexibility of allowing any number of designs to be placed appropriately in a design
space with an irregular boundary. The understanding of the D-optimality criterion requires the formulation
of the least squares problem.
Consider a single response variable y dependent upon a number of variables x. The exact functional
relationship between these quantities is
$$y = \eta(x) \qquad (2.11)$$

This relationship is approximated by

$$\eta(x) \approx f(x) \qquad (2.12)$$

where $f$ is expressed as a linear combination of $L$ basis functions $\phi_i$:

$$f(x) = \sum_{i=1}^{L} a_i \phi_i(x) \qquad (2.13)$$
The constants $a = [a_1, a_2, \ldots, a_L]^T$ have to be determined in order to minimize the sum of the square error:

$$\sum_{p=1}^{P} \left[ y(x_p) - f(x_p) \right]^2 = \sum_{p=1}^{P} \left[ y(x_p) - \sum_{i=1}^{L} a_i \phi_i(x_p) \right]^2 \qquad (2.14)$$
where $P$ is the number of experimental points and $y(x_p)$ is the exact functional response at the experimental points $x_p$. The least squares solution is

$$a = (X^T X)^{-1} X^T y \qquad (2.15)$$

where $X$ is the matrix

$$X = [X_{ui}] = [\phi_i(x_u)] \qquad (2.16)$$
The next critical step is to choose appropriate basis functions. A popular choice is the quadratic
approximation
$$\phi = [1, x_1, \ldots, x_n, x_1^2, x_1 x_2, \ldots, x_1 x_n, \ldots, x_n^2]^T \qquad (2.17)$$
but any suitable function can be chosen. LS-OPT allows linear, elliptical (linear and diagonal terms),
interaction (linear and off-diagonal terms) and quadratic functions.
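As an illustration, the following minimal sketch (assuming NumPy; the function names are hypothetical and not part of LS-OPT) constructs the quadratic basis of Eq. (2.17) for each experimental point and solves the least squares problem of Eqs. (2.15) and (2.16):

    import numpy as np

    def quadratic_basis(x):
        # phi = [1, x1..xn, x1^2, x1*x2, ..., xn^2], cf. Eq. (2.17)
        n = len(x)
        terms = [1.0] + list(x)
        for i in range(n):
            for j in range(i, n):
                terms.append(x[i] * x[j])
        return np.array(terms)

    def fit_response_surface(points, y):
        # points: P x n array of experimental designs; y: P observed responses
        X = np.array([quadratic_basis(p) for p in points])  # Eq. (2.16)
        a, *_ = np.linalg.lstsq(X, y, rcond=None)           # solves Eq. (2.15)
        return a

Using np.linalg.lstsq rather than forming $(X^T X)^{-1}$ explicitly is numerically preferable, although both solve the same least squares problem.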
• Design exploration
As design is a process, often requiring feedback and design modifications, designers are mostly
interested in suitable design formulae, rather than a specific design. If this can be achieved, and the
proper design parameters have been used, the design remains flexible and changes can still be made at a
late stage before verification of the final design. This also allows multidisciplinary design to proceed
with a smaller risk of having to repeat simulations. As designers are moving towards computational
prototyping, and as parallel computers or network computing are becoming more commonplace, the
paradigm of design exploration is becoming more important. Response surface methods can thus be
used for global exploration in a parallel computational setting. For instance, interactive trade-off studies
can be conducted.
• Global optimization
Response surfaces have a tendency to capture globally optimal regions because of their smoothness and
global approximation properties. Local minima caused by noisy response are thus avoided.
Neural network and Kriging approximations can also be used as response surfaces and are discussed in
Sections 2.10.1 and 2.10.2.
Factorial designs may be expensive to use directly, especially for a large number of design variables.
x1   x2   x3
 0    0    0
 1    0    0
 0    1    0
 0    0    1
-1    0    0
 0   -1    0
 0    0   -1
 1    1    0
 1    0    1
 0    1    1
This design uses the $2^n$ factorial design, the center point, and the 'face center' points and therefore consists of $P = 2^n + 2n + 1$ experimental design points. For n = 3, the coordinates are:
x1   x2   x3
 0    0    0
 α    0    0
 0    α    0
 0    0    α
-α    0    0
 0   -α    0
 0    0   -α
-1   -1   -1
 1   -1   -1
-1    1   -1
-1   -1    1
 1    1   -1
 1   -1    1
-1    1    1
 1    1    1
This method uses a subset of all the possible design points as a basis to solve

$$\max \left| X^T X \right|.$$
The subset is usually selected from an $l^n$-factorial design where $l$ is chosen a priori as the number of grid
points in any particular dimension. Design regions of irregular shape, and any number of experimental
points, can be considered [48]. The experiments are usually selected within a sub-region in the design space
thought to contain the optimum. A genetic algorithm is used to solve the resulting discrete maximization
problem. See References [43, 68].
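As a minimal illustration of the criterion itself (not of LS-OPT's genetic algorithm), the following sketch, assuming NumPy and hypothetical names, improves a random P-point subset of the candidate set by accepting only exchanges that increase $|X^T X|$:

    import numpy as np

    def d_criterion(X):
        # log-determinant of the information matrix X^T X
        sign, logdet = np.linalg.slogdet(X.T @ X)
        return logdet if sign > 0 else -np.inf

    def exchange_d_optimal(candidate_rows, P, iters=1000, seed=0):
        # candidate_rows: N x L model matrix rows phi(x) of all candidate points
        rng = np.random.default_rng(seed)
        N = candidate_rows.shape[0]
        chosen = list(rng.choice(N, size=P, replace=False))
        best = d_criterion(candidate_rows[chosen])
        for _ in range(iters):
            trial = chosen.copy()
            trial[rng.integers(P)] = rng.integers(N)    # random exchange
            score = d_criterion(candidate_rows[trial])
            if score > best:                            # keep improving moves only
                chosen, best = trial, score
        return chosen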
The numbers of required experimental designs for linear as well as quadratic approximations are
summarized in the table below. The value for the D-optimality criterion is chosen to be 1.5 times the Koshal
design value plus one. This seems to be a good compromise between prediction accuracy and computational
cost [48]. The factorial design referred to below is based on a regular grid of $2^n$ points (linear) or $3^n$ points (quadratic).
The Latin Hypercube design is a constrained random experimental design in which, for n points, the range
of each design variable is subdivided into n non-overlapping intervals on the basis of equal probability. One
value from each interval is then selected at random with respect to the probability density in the interval.
The n values of the first variable are then paired randomly with the n values of variable 2. These n pairs are
then combined randomly with the n values of variable 3 to form n triplets, and so on, until n-tuplets are formed.
Latin Hypercube designs are independent of the mathematical model of the approximation and allow
estimation of the main effects of all factors in the design in an unbiased manner. On each level of every
design variable only one point is placed. There are the same number of levels as points, and the levels are
assigned randomly to points. This method ensures that every variable is represented, even if the response is dominated by only a few of them. Another advantage is that the number of points to be analyzed
can be directly defined. Let P denote the number of points, and n the number of design variables, each of
which is uniformly distributed between 0 and 1. Latin hypercube sampling (LHS) provides a P-by-n matrix
$S = [S_{ij}]$ that randomly samples the entire design space broken down into $P$ equal-probability regions:

$$S_{ij} = \frac{\eta_{ij} - \zeta_{ij}}{P}, \qquad (2.20)$$

where $\eta_{1j}, \ldots, \eta_{Pj}$ are uniform random permutations of the integers 1 through $P$ and the $\zeta_{ij}$ are independent random numbers uniformly distributed between 0 and 1. A common simplified version of LHS has centered points of the $P$ equal-probability sub-intervals:

$$S_{ij} = \frac{\eta_{ij} - 0.5}{P}. \qquad (2.21)$$
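A minimal sketch of Eqs. (2.20) and (2.21), assuming NumPy (the function name is hypothetical):

    import numpy as np

    def latin_hypercube(P, n, centered=False, seed=0):
        rng = np.random.default_rng(seed)
        S = np.empty((P, n))
        for j in range(n):
            eta = rng.permutation(P) + 1    # random permutation of 1..P
            zeta = rng.random(P)            # independent U(0,1) numbers
            if centered:
                S[:, j] = (eta - 0.5) / P   # Eq. (2.21): interval midpoints
            else:
                S[:, j] = (eta - zeta) / P  # Eq. (2.20)
        return S

Each column places exactly one point in each of the P equal-probability intervals, which is the defining property of a Latin hypercube.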
LHS can be thought of as a stratified Monte Carlo sampling. Latin hypercube samples look like random
scatter in any bivariate plot, though they are quite regular in each univariate plot. Often, in order to generate
an especially good space filling design, the Latin hypercube point selection $S$ described above is taken as a starting experimental design, and then the values in each column of the matrix $S$ are permuted so as to optimize some criterion. Several such criteria are described in the literature.
Maximin
One approach is to maximize the minimal distance between any two points (i.e. between any two rows of
S). This optimization could be performed using, for example, Simulated Annealing (see Appendix F). The
maximin strategy would ensure that no two points are too close to each other. For small P, maximin distance
designs will generally lie on the exterior of the design space and fill in the interior as P becomes larger. See
Section 2.6.6 for more detail.
Centered L2-discrepancy
Another strategy is to minimize the centered L2-discrepancy measure. The discrepancy is a quantitative
measure of non-uniformity of the design points on an experimental domain. Intuitively, for a uniformly
distributed set in the n-dimensional cube I n = [0,1]n, we would expect the same number of points to be in
all subsets of I n having the same volume. Discrepancy is defined by considering the number of points in
the subsets of I n . Centered L2 (CL2) takes into account not only the uniformity of the design points over
the n-dimensional box region I n , but also the uniformity of all the projections of points over lower-
dimensional subspaces:
$$CL_2^2 = \left(\frac{13}{12}\right)^n - \frac{2}{P} \sum_{i=1}^{P} \prod_{j=1}^{n} \left( 1 + \frac{|S_{ij} - 0.5|}{2} - \frac{(S_{ij} - 0.5)^2}{2} \right) \qquad (2.22)$$

$$\qquad + \frac{1}{P^2} \sum_{k=1}^{P} \sum_{i=1}^{P} \prod_{j=1}^{n} \left( 1 + \frac{|S_{kj} - 0.5|}{2} + \frac{|S_{ij} - 0.5|}{2} - \frac{|S_{kj} - S_{ij}|}{2} \right).$$
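The measure of Eq. (2.22) can be evaluated directly; a sketch assuming NumPy (function name hypothetical):

    import numpy as np

    def centered_l2_discrepancy(S):
        # S: P x n design in the unit cube; returns CL2^2 of Eq. (2.22)
        P, n = S.shape
        d = np.abs(S - 0.5)
        term1 = (13.0 / 12.0) ** n
        term2 = (2.0 / P) * np.sum(np.prod(1 + d / 2 - d ** 2 / 2, axis=1))
        dk = d[:, None, :]                           # |S_kj - 0.5|
        di = d[None, :, :]                           # |S_ij - 0.5|
        dik = np.abs(S[:, None, :] - S[None, :, :])  # |S_kj - S_ij|
        term3 = np.sum(np.prod(1 + dk / 2 + di / 2 - dik / 2, axis=2)) / P ** 2
        return term1 - term2 + term3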
In the modeling of an unknown nonlinear relationship, when there is no persuasive parametric regression
model available, and the constraints are uncertain, one might believe that a good experimental design is a set
of points that are uniformly scattered on the experimental domain (design space). Space-filling designs
impose no strong assumptions on the approximation model, and allow a large number of levels for each
variable with a moderate number of experimental points. These designs are especially useful in conjunction
with nonparametric models such as neural networks (feed-forward networks, radial basis functions) and
Kriging [74, 77]. Space-filling points can also be submitted as the basis set for constructing an optimal (D-
Optimal, etc.) design for a particular model (e.g. polynomial). Some space-filling designs are: random Latin
Hypercube Sampling (LHS), Orthogonal Arrays, and Orthogonal Latin Hypercubes.
The key to space-filling experimental designs is in generating 'good' random points and achieving
reasonably uniform coverage of sampled volume for a given (user-specified) number of points. In practice,
however, we can only generate finite pseudorandom sequences, which, particularly in higher dimensions,
can lead to a clustering of points, which limits their uniformity. Finding a good space-filling design is a hard nonlinear programming problem which, from a theoretical point of view, is difficult to solve exactly. The problem does, however, have representations that are within the reach of currently available tools. To reduce the search time and still generate good designs, a popular approach is to restrict the
search within a subset of the general space-filling designs. This subset typically has some good 'built-in'
properties with respect to the uniformity of a design.
The constrained randomization method termed Latin Hypercube Sampling (LHS) and proposed in [37], has
become a popular strategy to generate points on the 'box' (hypercube) design region. The method implies
that on each level of every design variable only one point is placed, and the number of levels is the same as
the number of runs. The levels are assigned to runs either randomly or so as to optimize some criterion, e.g.
so that the minimal distance between any two design points is maximized ('maximin distance' criterion).
Restricting the design in this way tends to produce better Latin Hypercubes. However, the computational
cost of obtaining these designs is high. In multidimensional problems, the search for an optimal Latin
hypercube design using traditional deterministic methods (e.g. the optimization algorithm described in [45])
may be computationally prohibitive. This situation motivates the search for alternatives.
Probabilistic search techniques, simulated annealing and genetic algorithms are attractive heuristics for
approximating the solution to a wide range of optimization problems. In particular, these techniques are
frequently used to solve combinatorial optimization problems, such as the traveling salesman problem.
Morris and Mitchell [42] adopted the simulated annealing algorithm to search for optimal Latin hypercube
designs.
In LS-OPT, space-filling designs can be useful for constructing experimental designs for the following
purposes:
1. The generation of basis points for the D-optimality criterion. This avoids the necessity to create a
very large number of basis points using e.g. the full factorial design for large n. E.g. for n=20 and 3
points per variable, the number of points = 320 ≈ 3.5*109.
2. The generation of design points for all approximation types, but especially for neural networks and
Kriging.
3. The augmentation of an existing experimental design. This means that points can be added for each
iteration while maintaining uniformity and equidistance with respect to pre-existing points.
LS-OPT contains 6 algorithms to generate space-filling designs (see Table 2-2). Only Algorithm 5 has been
made available in the graphical interface, LS-OPTui.
Discussion of algorithms
The Maximin distance space-filling algorithms 3, 4 and 5 minimize the energy function defined as the
negative minimal distance between any two design points. Theoretically, any function that is a metric can be
used to measure distances between points, although in practice the Euclidean metric is usually employed.
The three algorithms, 3, 4 and 5, differ in their selection of random Simulated Annealing moves from one
state to a neighboring state. For algorithm 3, the next design is always a 'central point' LHS design (Eq.
2.21). The algorithm swaps two elements of $S$, $S_{ij}$ and $S_{kj}$, where $i$ and $k$ are random integers from 1 to $P$, and $j$ is a random integer from 1 to $n$. Each step of algorithm 4 makes small random changes to an LHS design
point, preserving the initial LHS structure. Algorithm 5 transforms the design completely randomly, one point at a time. In more technical terms, algorithms 4 and 5 generate a candidate state, $S'$, by modifying a randomly chosen element $S_{ij}$ of the current design, $S$, according to:

$$S'_{ij} = S_{ij} + \xi \qquad (2.23)$$

where $\xi$ is a random number sampled from a normal distribution with zero mean and standard deviation $\sigma_\xi \in [\sigma_{min}, \sigma_{max}]$. In algorithm 4 it is required that both $S'_{ij}$ and $S_{ij}$ in Eq. (2.23) belong to the same Latin hypercube subinterval.
Notice that the maximin distance energy function does not need to be completely recalculated at every iterative step of simulated annealing. The perturbation in the design applies only to some of the rows and columns of $S$. After each step we can recompute only those nearest-neighbor distances that are affected by the stepping procedures described above. This reduces the computation and increases the speed of the algorithm.
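The following sketch (assuming NumPy; the geometric cooling schedule is an assumption for illustration) shows the maximin energy function and a simplified annealing loop using the column-swap move of algorithm 3. Unlike the incremental update described above, it recomputes the energy in full at every step for clarity:

    import numpy as np

    def maximin_energy(S):
        # negative minimal pairwise Euclidean distance (to be minimized)
        D = np.linalg.norm(S[:, None, :] - S[None, :, :], axis=2)
        np.fill_diagonal(D, np.inf)
        return -D.min()

    def anneal_maximin(S, T_max=1.0, T_min=1e-3, steps=5000, seed=0):
        # S: P x n design (e.g. a centered Latin hypercube in the unit cube)
        rng = np.random.default_rng(seed)
        S = S.copy()
        P, n = S.shape
        E = maximin_energy(S)
        for step in range(steps):
            T = T_max * (T_min / T_max) ** (step / max(steps - 1, 1))
            i, k = rng.choice(P, size=2, replace=False)
            j = rng.integers(n)
            S[i, j], S[k, j] = S[k, j], S[i, j]      # swap two levels in column j
            E_new = maximin_energy(S)
            if E_new <= E or rng.random() < np.exp(-(E_new - E) / T):
                E = E_new                            # accept the move
            else:
                S[i, j], S[k, j] = S[k, j], S[i, j]  # reject: swap back
        return S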
To perform an annealing run for algorithms 3, 4 and 5, the values for $T_{max}$ and $T_{min}$ can be adapted to the scale of the objective function according to:

$$T_{max} = \hat{T}_{max}\,\overline{\Delta E}, \qquad T_{min} = \hat{T}_{min}\,\overline{\Delta E} \qquad (2.24)$$

where $\overline{\Delta E} > 0$ is the average value of $|E' - E|$ observed in a short preliminary run of simulated annealing and $\hat{T}_{max}$ and $\hat{T}_{min}$ are positive parameters.
The basic parameters that control the simulated annealing in algorithms 3, 4 and 5 can be summarized as
follows:
1. Energy function: the negative minimal distance between any two points in the design.
3. Scalar parameters:

2.6.7 Random number generator
A portable random number generator is used for the construction of D-optimal designs (using the genetic
algorithm) and to generate Monte Carlo designs such as the Latin Hypercube. The algorithm employs a
scheme by Bays and Durham, as described in Knuth [29], and resembles the Maclaren-Marsaglia method. It
greatly increases the period of the basic linear congruential generator.
In certain applications the Mersenne Twister is used. The Mersenne Twister (MT19937) is a pseudorandom
number generator developed by Matsumoto and Nishimura [37] and has the merit that it has a far longer
period and a far higher order of equidistribution than other implemented generators. It has been proved that the period is $2^{19937} - 1$, and a 623-dimensional equidistribution property is assured. LS-OPT will probably
standardize on this generator in future versions.
design space. Therefore, once the first approximation has been established, all the designs will be contained
in the new region of interest. This region of interest is thus defined by approximate bounds.
One way of establishing a reasonable set of designs is to move the points of the basis experimental design to
the boundaries of the reasonable design space in straight lines connecting to the central design xc so that
$$x' = x_c + \alpha (x - x_c) \qquad (2.25)$$
This step may cause near-duplicate design points, which can be addressed by removing points from the set that are closer to each other than a fixed fraction (typically 5%) of the design space size.
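A sketch of Eq. (2.25) together with the near-duplicate filter, assuming NumPy and a design space normalized to the unit cube; the per-point scaling factors alpha (one per basis point, determined from the reasonable-space bounds) are taken as given:

    import numpy as np

    def move_to_reasonable_space(points, x_c, alpha, min_frac=0.05):
        # Eq. (2.25): pull each basis point toward the central design x_c
        moved = x_c + alpha[:, None] * (points - x_c)
        # remove points closer than a fraction of the design space size
        keep = []
        for i in range(len(moved)):
            if all(np.linalg.norm(moved[i] - moved[j]) >= min_frac for j in keep):
                keep.append(i)
        return moved[keep]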
The D-optimality criterion is then used to attempt to find a well-conditioned design from the basis set of
experiments in the reasonable design space. Using the above approach, a poor distribution of the basis
points may result in a poorly designed subset.
For the predicted response $\hat{y}_i$ and the actual response $y_i$, this error is expressed as

$$\varepsilon^2 = \sum_{i=1}^{P} (y_i - \hat{y}_i)^2 \qquad (2.26)$$
If applied only to the regression points, this error measure is not very meaningful unless the design space is
oversampled. E.g. ε = 0 if the number of points P equals the number of basis functions L in the
approximation.
The residual sum-of-squares is sometimes used in its square root form, $\varepsilon_{RMS}$, and called the 'RMS error':

$$\varepsilon_{RMS} = \sqrt{\frac{1}{P} \sum_{i=1}^{P} (y_i - \hat{y}_i)^2} \qquad (2.27)$$
The maximum residual is the largest residual considered over all the design points and is given by

$$\varepsilon_{max} = \max_i \left| y_i - \hat{y}_i \right|. \qquad (2.28)$$
The RMS prediction error is the same as the RMS error, but uses only responses at preselected prediction points independent of the
regression points. This error measure is an objective measure of the prediction accuracy of the response
surface since it is independent of the number of construction points. It is important to know that the choice
of a larger number of construction points will, for smooth problems, diminish the prediction error.
The prediction points $x_p$ can be selected by augmenting the design matrix:

$$X_a(x_p) = \begin{bmatrix} X \\ A(x_p) \end{bmatrix} \qquad (2.29)$$

and solving

$$\max \left| X_a^T X_a \right| = \max \left| X^T X + A^T A \right| \qquad (2.30)$$

for $x_p$.
The prediction sum of squares residual (PRESS) uses each possible subset of P – 1 responses as a regression
data set, and the remaining response in turn is used to form a prediction set [43]. PRESS can be computed
from a single regression analysis of all P points.
$$PRESS = \sum_{i=1}^{P} \left( \frac{y_i - \hat{y}_i}{1 - h_{ii}} \right)^2 \qquad (2.31)$$
$H$ is the 'hat' matrix,

$$H = X (X^T X)^{-1} X^T, \qquad (2.32)$$

the matrix that maps the observed responses to the fitted responses, i.e.

$$\hat{y} = Hy \qquad (2.33)$$
The PRESS residual can also be written in its square root form,

$$SPRESS = \sqrt{\sum_{i=1}^{P} \left( \frac{y_i - \hat{y}_i}{1 - h_{ii}} \right)^2}. \qquad (2.34)$$
For a saturated design, H equals the unit matrix I so that the PRESS indicator becomes undefined.
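A sketch of the single-regression PRESS computation, assuming NumPy (for a saturated design the leverages h_ii approach 1 and the expression is undefined, as noted above):

    import numpy as np

    def press_statistic(X, y):
        # X: P x L model matrix, y: P observed responses
        H = X @ np.linalg.inv(X.T @ X) @ X.T  # "hat" matrix of Eq. (2.32)
        y_hat = H @ y                         # fitted responses, Eq. (2.33)
        h = np.diag(H)                        # leverages h_ii
        return np.sum(((y - y_hat) / (1.0 - h)) ** 2)  # Eq. (2.31)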
The coefficient of determination is defined as

$$R^2 = \frac{\sum_{i=1}^{P} (\hat{y}_i - \bar{y})^2}{\sum_{i=1}^{P} (y_i - \bar{y})^2} \qquad (2.35)$$
where $P$ is the number of design points and $\bar{y}$, $\hat{y}_i$ and $y_i$ represent the mean of the responses, the predicted response, and the actual response, respectively. This indicator, which varies between 0 and 1, represents the
ability of the response surface to identify the variability of the design response. A low value of R2 usually
means that the region of interest is either too large or too small and that the gradients are not trustworthy.
The value of 1.0 for $R^2$ indicates a perfect fit. However, the value will not warn against an overfitted model
with poor prediction capabilities.
In an iterative scheme with a shrinking region the R2 value tends to be small at the beginning, then
approaches unity as the region of interest shrinks, thereby improving the modeling ability. It may then
reduce again as the noise starts to dominate in a small region causing the variability to become
indistinguishable. In the same progression, the prediction error will diminish as the modeling error fades,
but will stabilize at above zero as the modeling error is replaced by the random error (noise).
2.9 ANOVA
Since the number of regression coefficients determines the number of simulation runs, it is important to
remove those coefficients or variables which have small contributions to the design model. This can be done
by doing a preliminary study involving a design of experiments and regression analysis. The statistical
results are used in an analysis of variance (ANOVA) to rank the variables for screening purposes. The
procedure requires a single iteration using polynomial regression, but results are produced after every
iteration of a normal optimization procedure.
The 100(1 − α)% confidence interval for the regression coefficients $b_j$, $j = 0, 1, \ldots, L$ is determined by the inequality

$$b_j - \frac{\Delta b_j}{2} \leq \beta_j \leq b_j + \frac{\Delta b_j}{2} \qquad (2.38)$$

where

$$\Delta b_j(\alpha) = 2 t_{\alpha/2, P-L} \sqrt{\hat{\sigma}^2 C_{jj}} \qquad (2.39)$$

with $C_{jj}$ the $j$th diagonal element of $(X^T X)^{-1}$, and

$$\hat{\sigma}^2 = \frac{\varepsilon^2}{P - L} = \frac{\sum_{i=1}^{P} (y_i - \hat{y}_i)^2}{P - L}. \qquad (2.40)$$
The contribution of a single regressor variable to the model can also be investigated. This is done by means
of the partial F-test, where $F$ is calculated to be

$$F = \frac{\left[ \varepsilon^2_{reduced} - \varepsilon^2_{complete} \right] / r}{\varepsilon^2_{complete} / (P - L)} \qquad (2.41)$$
where r = 1 and the reduced model is the one in which the regressor variable in question has been removed.
Each of the $\varepsilon^2$ terms represents the sum of squared residuals for the reduced and complete models
respectively.
It turns out that the computation can be done without analyzing a reduced model by computing

$$F = \frac{b_j^2 / C_{jj}}{\varepsilon^2_{complete} / (P - L)}. \qquad (2.42)$$
$F$ can be compared with the F-statistic $F_{\alpha,1,P-L}$, so that if $F > F_{\alpha,1,P-L}$, $\beta_j$ is nonzero with 100(1 − α)% confidence. The confidence level that $\beta_j$ is nonzero can also be determined by computing the $\alpha$ for which $F = F_{\alpha,1,P-L}$. The importance of $\beta_j$ is therefore estimated by both the magnitude of $b_j$ and the level of confidence in a nonzero $\beta_j$.
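A sketch of the partial F-test of Eqs. (2.40) and (2.42), assuming NumPy and SciPy (the function name is hypothetical):

    import numpy as np
    from scipy.stats import f as f_dist

    def partial_f_test(X, y):
        # X: P x L model matrix, y: P responses; returns per-coefficient
        # estimates, F values and the confidence that beta_j is nonzero
        P, L = X.shape
        C = np.linalg.inv(X.T @ X)
        b = C @ X.T @ y
        resid = y - X @ b
        sigma2 = resid @ resid / (P - L)    # Eq. (2.40)
        F = b ** 2 / (np.diag(C) * sigma2)  # Eq. (2.42)
        alpha = f_dist.sf(F, 1, P - L)      # upper-tail probability
        return b, F, 1.0 - alpha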
The significance of regressor variables may be represented by a bar chart of the magnitudes of the coefficients $b_j$, with an error bar of length $\Delta b_j(\alpha)$ for each coefficient representing the confidence interval
for a given level of confidence α. The relative bar lengths allow the analyst to estimate the importance of
the variables and terms to be included in the model while the error bars represent the contribution to noise or
poorness of fit by the variable.
All terms have been normalized to the size of the design space so that the choice of units becomes irrelevant
and a reasonable comparison can be made for variables of different kinds, e.g. sizing and shape variables or
different material constants.
¹ Available in Version 2.1
When using polynomials, the user is faced with the choice of deciding which monomial terms to include. In
addition, polynomials, by way of their nature as Taylor series approximations, are not natural for the
creation of updateable surfaces. This means that if an existing set of point data is augmented by a number of
new points which have been selected in a local subregion (e.g. in the vicinity of a predicted optimum), better
information could be gained from a more flexible type of approximation that will keep global validity while
allowing refinement in a subregion of the parameter space. Such an approximation provides a more natural
approach for combining the results of successive iterations.
Neural methods are natural extensions and generalizations of regression methods. Neural networks have
been known since the 1940s, but it took dramatic improvements in computers to make them practical,
[8]. Neural networks - just like regression techniques - model relationships between a set of input variables
and an outcome. They can be thought of as computing devices consisting of numerical units (neurons),
whose inputs and outputs are linked according to specific topologies. A neural model is defined by its free
parameters - the inter-neuron connection strengths (weights) and biases. These parameters are typically
learned from the training data by some appropriate optimization algorithm. The training set consists of pairs
of input (design) vectors and associated outputs (responses). The training algorithm tries to steer network
parameters towards minimizing some distance measure, typically the mean squared error (MSE) of the
model computed on the training data.
Several factors determine the predictive accuracy of a neural network approximation and, if not properly
addressed, may adversely affect the solution. For a neural network, as well as for any other data-derived
model, the most critical factor is the quality of training data. In practical cases, we are limited to a given
data set, and the central problem is that of not enough data. The minimal number of data points required for
network training is related to the (unknown) complexity of the underlying function and the dimensionality
of the design space. In reality, the more design variables, the more training samples are required. In the
statistical and neural network literature this problem is known as the 'curse of dimensionality'. Most forms
of neural networks (in particular, feed-forward networks) actually suffer less from the curse of
dimensionality than some other methods, as they can concentrate on a lower-dimensional section of the
high-dimensional space. For example, by setting the outgoing weights from a particular input to zero, a
network can entirely ignore that input (Figure 2-2). Nevertheless, the curse of dimensionality is still a
problem, and the performance of a network can certainly be improved by eliminating unnecessary input
variables.
Figure 2-2: Schematic of a neural network with 2 inputs and a hidden layer of 4 neurons with activation
function f
It is clear that if the number of network free parameters is sufficiently large and the training optimization
algorithm is run long enough, it is possible to drive the training MSE error as close as one likes to zero.
However, it is also clear that driving MSE all the way to zero is not a desirable thing to do. For noisy data,
this may indicate over-fitting rather than good modeling. For highly discrepant training data, zero MSE
makes no sense at all. Regularization means that some constraints are applied to the construction of the
neural model with the goal of reducing the generalization error, that is, improving the ability to predict (interpolate)
the unobserved response for new data points that are generated by a similar mechanism as the observed data.
A fundamental problem in modeling noisy and/or incomplete data, is to balance the ’tightness’ of the
constraints with the ’goodness of fit’ to the observed data. This tradeoff is called the bias-variance tradeoff
in the statistical literature.
Figure 2-3: Sigmoid transfer function $y = 1/(1 + e^{-x})$ typically used with feed-forward networks
A multi-layer feed-forward network and a radial basis function network are two of the most common neural
architectures used for approximating functions. Networks of both types have a distinct layered topology in the sense that their processing units ('neurons') are divided into several groups ('layers'), the outputs of each layer of neurons being the inputs to the next layer (Figure 2-2). In a feed-forward network, each neuron performs a biased weighted sum of its inputs and passes this value through a transfer (activation) function to produce the output. The activation function of intermediate ('hidden') layers is generally a sigmoidal function
(Figure 2-3), while network input and output layers are usually linear (transparent). In theory, such networks
can model functions of almost arbitrary complexity, see [25, 73]. All of the parameters in a feed-forward
network are usually determined at the same time as part of a single (non-linear) optimization strategy based
on the standard gradient algorithms (the steepest descent, RPROP, Levenberg-Marquardt, etc.). The gradient
information is typically obtained using a technique called backpropagation, which is known to be
computationally effective [51]. For feed-forward networks, regularization may be done by controlling the number of network weights ('model selection'), by imposing penalties on the weights ('ridge regression'), or by various combinations of these strategies.
Nature is rarely (if ever) perfectly predictable. Real data never exactly fit the model that is being used. One
must take into consideration that the prediction errors not only come from the variance error due to the
intrinsic noise and unreliability in the measurement of the dependent variables but also from the systematic
(bias) error due to model mis-specification. According to George E.P. Box's famous maxim, "all models are wrong, but some are useful". To be genuinely useful, a fitting procedure should provide the means to assess
whether or not the model is appropriate and to test the goodness-of-fit against some statistical standard.
There are several error measures available to determine the accuracy of the model. Among them are:
$$MSE = \frac{1}{P} \sum_{i=1}^{P} (\hat{y}_i - y_i)^2, \qquad (2.43)$$

$$RMS = \sqrt{MSE},$$

$$nMSE = \frac{MSE}{\hat{\sigma}^2},$$

$$nRMS = \frac{RMS}{\sqrt{\hat{\sigma}^2}},$$

$$R^2 = \frac{\sum_{i=1}^{P} (\hat{y}_i - \bar{y})^2}{\sum_{i=1}^{P} (y_i - \bar{y})^2},$$

$$R = \frac{\sum_{i=1}^{P} (\hat{y}_i - \bar{\hat{y}})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{P} (\hat{y}_i - \bar{\hat{y}})^2 \sum_{i=1}^{P} (y_i - \bar{y})^2}}$$

where $P$ denotes the number of data points, $y_i$ is the observed response value ('target value'), $\hat{y}_i$ is the model's prediction of the response, $\bar{\hat{y}}$ is the mean (average) value of $\hat{y}$, $\bar{y}$ is the mean (average) value of $y$, and $\hat{\sigma}^2$ is given by

$$\hat{\sigma}^2 = \frac{\varepsilon^2}{P - L} = \frac{\sum_{i=1}^{P} (y_i - \hat{y}_i)^2}{P - L}. \qquad (2.44)$$
Mean squared error (MSE for short) and root mean squared error (RMS) summarize the overall model error.
Unique or rare large error values can affect these indicators. Sometimes, MSE and RMS measures are
normalized with sample variance of the target value (see formulae for nMSE and nRMS) to allow for
comparisons between different datasets and underlying functions. R2 and R are relative measures. The
coefficient of multiple determination $R^2$ ('R-square') is the explained variance relative to the total variance in the
target value. This indicator is widely used in linear regression analysis. R2 represents the amount of response
variability explained by the model. R is the correlation coefficient between the network response and the
target. It is a measure of how well the variation in the output is explained by the targets. If this number is
equal to 1, then there is a perfect correlation between targets and outputs. Outliers can greatly affect the
magnitudes of correlation coefficients. Of course, the larger the sample size, the smaller is the impact of one
or two outliers.
Training accuracy measures (MSE, RMS, $R^2$, R, etc.) are computed over all the data points used for training. As mentioned above, good performance of a model on the training set does not necessarily mean good prediction of new (unseen) data. The objective measures of the prediction accuracy of the model are test errors computed at independent testing points (i.e. not training points). This is certainly true
provided that we have an infinite number of testing points. In practice, however, test indicators are usable,
only if treated with appropriate caution. Actual problems are often characterized by the limited availability
of data, and when the training datasets are quite small and the test sets are even smaller, only quite large
differences in performance can be reliably discerned by comparing training and test indicators.
The generalized cross-validation (GCV) [71] and Akaike’s final prediction error (FPE) [1] provide
computationally feasible means of estimating the appropriateness of the model.
GCV and FPE estimates combine the training MSE with a measure of the model complexity:
$$MSE_{GCV} = \frac{MSE}{(1 - \nu/P)^2}, \qquad (2.45)$$

$$RMS_{GCV} = \sqrt{MSE_{GCV}},$$

$$nMSE_{GCV} = \frac{MSE_{GCV}}{\hat{\sigma}^2},$$

$$nRMS_{GCV} = \frac{RMS_{GCV}}{\sqrt{\hat{\sigma}^2}}$$

where $\nu$ is the effective number of model parameters.
In theory, GCV estimates should be related to ν. As a very rough approximation to ν, we can assume that all
of the network free parameters are well determined so that ν = M, where M is the total number of network
weights and biases. This is what we would expect to be the case for large P so that P >> M. Note that GCV
is undefined when ν is equal to the number of training points (P).
Feed-forward (FF) neural networks have a distinct layered topology. Each unit performs a biased weighted sum of its inputs and passes this value through a transfer (activation) function to produce the output. The outputs of each layer of neurons are the inputs to the next layer. In a feed-forward network, the activation function of intermediate ('hidden') layers is generally a sigmoidal function (Figure 2-3), network input and
output layers being linear. Consider a FF network with K inputs, one hidden layer with H sigmoid units and
a linear output unit. For a given input vector $x = (x_1, \ldots, x_K)$ and network weights $W = (W_0, W_1, \ldots, W_H, W_{10}, W_{11}, \ldots, W_{HK})$, the output of the network is:

$$\hat{y}(x, W) = W_0 + \sum_{h=1}^{H} W_h f\!\left(W_{h0} + \sum_{k=1}^{K} W_{hk} x_k\right), \qquad (2.46)$$

where

$$f(x) = \frac{1}{1 + e^{-x}}.$$
The computational graph of Eq. (2.46) is shown schematically in Figure 2-2. The extension to the case of more than one hidden layer can be obtained accordingly. It is straightforward to show that the derivative of the network of Eq. (2.46) with respect to any of its inputs is given by:

$$\frac{\partial \hat{y}}{\partial x_k} = \sum_{h=1}^{H} W_h W_{hk}\, f'\!\left(W_{h0} + \sum_{k'=1}^{K} W_{hk'} x_{k'}\right), \qquad k = 1, \ldots, K. \qquad (2.47)$$
Neural networks have been mathematically shown to be universal approximators of continuous functions
and their derivatives (on compact sets) [25]. In other words, when a network (Eq. (2.46)) converges towards
the underlying function, all the derivatives of the network converge towards the derivatives of this function.
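A sketch of Eqs. (2.46) and (2.47) for a single hidden layer, assuming NumPy; the derivative uses the sigmoid identity $f'(z) = f(z)(1 - f(z))$:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def ff_net(x, W0, W, Wh0, Whk):
        # W0: output bias, W: (H,) hidden-to-output weights,
        # Wh0: (H,) hidden biases, Whk: (H, K) input-to-hidden weights
        a = sigmoid(Wh0 + Whk @ x)       # hidden-unit outputs
        y = W0 + W @ a                   # Eq. (2.46)
        dy_dx = (W * a * (1 - a)) @ Whk  # Eq. (2.47), via f' = f(1 - f)
        return y, dy_dx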
Standard non-linear optimization techniques including a variety of gradient algorithms (the steepest descent,
RPROP, Levenberg-Marquardt, etc.) are applied to adjust the FF network's weights and biases. For neural networks, the gradients are easily obtained using a chain-rule technique called 'backpropagation' [51]. The
second-order Levenberg-Marquardt algorithm appears to be the fastest method for training moderate-sized
FF neural networks (up to several hundred adjustable weights) [8]. However, when training larger networks,
the first-order RPROP algorithm becomes preferable for computational reasons [46].
Regularization: For FF networks, regularization may be done by controlling the number of network weights ('model selection'), by imposing penalties on the weights ('ridge regression'), or by various combinations of these strategies. Model selection requires choosing the number of hidden units and, sometimes, the number of network hidden layers. Most straightforward is to search for an 'optimal' network architecture that minimizes $MSE_{GCV}$, $MSE_{FPE}$ or $MSE_{CV-k}$. Often, it is feasible to loop over 1, 2, ... hidden units and finally
select the network with the smallest GCV error. In any event, in order for the GCV measure to be
applicable, the number of training points P should not be too small compared to the required network size
M.
Over-fitting: To prevent over-fitting, it is always desirable to find neural solutions with the smallest number
of parameters. In practice, however, networks with a very parsimonious number of weights are often hard to
train. The addition of extra parameters (i.e. degrees of freedom) can aid convergence and decrease the
chance of becoming stuck in local minima or on plateaus [32]. Weight decay regularization involves
modifying the performance function $F$, which is normally chosen to be the mean sum of squares of the
network errors on the training set (Eq. (2.43)). When minimizing MSE (Eq. (2.43)) the weight estimates
tend to be exaggerated. We can impose a penalty for this tendency by adding a term that consists of the sum
of squares of the network weights (see also Eq. (2.43)):
$$F = \beta E_D + \alpha E_W \qquad (2.48)$$

where

$$E_D = \frac{1}{2}\sum_{i=1}^{P} (\hat{y}_i - y_i)^2, \qquad E_W = \frac{1}{2}\sum_{m=1}^{M} W_m^2,$$

where M is the number of weights and P the number of points in the training set.
Notice that network biases are usually excluded from the penalty term EW. Using the modified performance
function (Eq. (2.48)) will cause the network to have smaller weights, and this will force the network
response to be smoother and less likely to overfit. This eliminates the guesswork required in determining the
optimum network size. Unfortunately, finding the optimal values for α and β is not a trivial task. If we make α/β too small, we may get over-fitting. If α/β is too large, the network will not adequately fit the training data. A rule of thumb is that a little regularization usually helps [52]. Importantly, weight decay regularization does not require that a validation subset be separated out of the training data: it uses all of the data. This advantage is especially noticeable in small sample size situations. Another nice property of
weight decay regularization is that it can lend numerical robustness to the Levenberg-Marquardt algorithm.
The L-M approximation to the Hessian of Eq. (2.48) is moved further away from singularity due to a
positive addend to its diagonal:
$$A = H + \alpha I \qquad (2.49)$$

where

$$H = \beta \nabla\nabla E_D \approx \sum_{i=1}^{P} g(x^{(i)})\, g(x^{(i)})^T, \qquad g(x) = \Big(\frac{\partial \hat{y}}{\partial W_1}, \ldots, \frac{\partial \hat{y}}{\partial W_M}\Big)^T.$$
In [8, 18, 39 and 40] the Bayesian (’evidence framework’ or ’type II maximum likelihood’) approach to
regularization is discussed. The Bayesian re-estimation algorithm is formulated as follows. At first, we
choose the initial values for α and β. Then, a neural network is trained using a standard non-linear
optimization algorithm to minimize the error function (Eq. (2.48)). After training, i.e. in the minimum of Eq.
(2.48), the values for α and β are re-estimated, and training restarts with the new performance function.
Regularization hyperparameters are computed in a sequence of three steps:

$$\nu = \sum_{m=1}^{M} \frac{\lambda_m}{\lambda_m + \alpha}, \qquad (2.50)$$

where $\lambda_m$, m = 1, ..., M, are the (positive) eigenvalues of the matrix H in Eq. (2.49) and ν is the estimate of the effective number of parameters of the neural network;

$$\alpha = \frac{\nu}{2 E_W}, \qquad \beta = \frac{P - \nu}{2 E_D}. \qquad (2.51)$$
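A single re-estimation pass is compact enough to sketch directly. The following Python fragment is our own illustration (the network training step itself is left to any standard optimizer and is not shown); it applies Eqs. (2.50) and (2.51):

```python
import numpy as np

def bayes_reestimate(H, E_D, E_W, P, alpha):
    """One pass of Eqs. (2.50)-(2.51): re-estimate alpha and beta from
    the Hessian H of Eq. (2.49) and the error terms E_D and E_W."""
    lam = np.linalg.eigvalsh(H)           # (positive) eigenvalues of H
    nu = np.sum(lam / (lam + alpha))      # effective number of parameters
    alpha_new = nu / (2.0 * E_W)          # Eq. (2.51)
    beta_new = (P - nu) / (2.0 * E_D)
    return alpha_new, beta_new, nu
```

Training then restarts with the updated performance function until the hyperparameters stabilize.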
It should be noted that the algorithm (Eqs. (2.50) and (2.51)) relies on numerous simplifications and
assumptions, which hold only approximately in typical real-world problems [13]. In the Bayesian formalism
a trained network is described in terms of the posterior probability distribution of weight values. The
method typically assumes a simple Gaussian prior distribution of weights governed by an inverse variance hyperparameter $\alpha = 1/\sigma^2_{weights}$. If we present a new input vector to such a network, then the distribution of weights gives rise to a distribution of network outputs. There will also be an addend to the output distribution arising from the assumed Gaussian noise on the output variables, with $\sigma^2_{noise} = 1/\beta$:

$$y = y(x) + N(0, \sigma^2_{noise}). \qquad (2.52)$$
With these assumptions, the negative log likelihood of the network weights W given P training points x^(1), ..., x^(P) is proportional to the MSE (Eq. (2.43)), i.e., the maximum likelihood estimate for W is that which minimizes Eq. (2.43) or, equivalently, E_D. In order for the Bayes estimates of α and β to do a good job of minimizing the generalization error in practice, it is usually necessary that the priors on which they are based are realistic. The Bayesian formalism also allows us to calculate error bars on the network outputs, instead of just providing a single 'best guess' output ŷ. Given an unbiased model, minimization of the performance function (Eq. (2.48)) amounts to minimizing the variance of the model. The estimate for the output variance $\sigma^2_{\hat{y}|x}$ of the network at a particular point x is given by:

$$\sigma^2_{\hat{y}|x} \approx g(x)^T A^{-1} g(x). \qquad (2.53)$$

Equation (2.53) is based on a second-order Taylor series expansion of Eq. (2.48) around its minimum and assumes that $\partial \hat{y}/\partial W$ is locally linear.
2.10.2 Kriging*
Kriging is named after D.G. Krige [32], who applied empirical methods for determining true ore grade distributions from distributions based on sampled ore grades. In recent years, the Kriging method has found wider application as a spatial prediction method in engineering design. Detailed mathematical formulations of Kriging are given by Simpson [56] and Bakker [5]. The postulated model is

$$y(x) = f(x) + Z(x),$$

where y is the unknown function of interest, f(x) is a known polynomial and Z(x) the stochastic component with mean zero and covariance:

$$\mathrm{Cov}[Z(x^i), Z(x^j)] = \sigma^2 [R(x^i, x^j)].$$

With L the number of sampling points, R is the L × L correlation matrix with entries R(x^i, x^j), the correlation function between data points x^i and x^j. R is symmetric positive definite with unit diagonal.
Exponential: $R = \prod_{k=1}^{n} e^{-\theta_k |d_k|}$

Gaussian: $R = \prod_{k=1}^{n} e^{-\theta_k d_k^2}$

where n is the number of variables and $d_k = x_k^i - x_k^j$ is the distance between the k-th components of points $x^i$ and $x^j$. There are n unknown θ-values to be determined. The default correlation function in LS-OPT is the Gaussian.
Once the correlation function has been selected, the predicted estimate of the response ŷ(x) is given by:

$$\hat{y}(x) = \hat{\beta} + r^T(x)\, R^{-1} (y - f \hat{\beta}),$$

where $r^T(x)$ is the correlation vector (length L) between a prediction point x and the L sampling points, y represents the responses at the L points and f is an L-vector of ones (in the case that f(x) is taken as a constant). The vector r and scalar $\hat{\beta}$ are given by:

$$r^T(x) = [R(x, x^1), R(x, x^2), \ldots, R(x, x^L)]^T, \qquad \hat{\beta} = (f^T R^{-1} f)^{-1} f^T R^{-1} y.$$
The estimate of the variance is

$$\hat{\sigma}^2 = \frac{(y - f\hat{\beta})^T R^{-1} (y - f\hat{\beta})}{L}.$$
The maximum likelihood estimates for $\theta_k$, k = 1, ..., n, can be found by solving the following constrained maximization problem:

$$\max\; \Phi(\theta) = -\frac{L \ln(\hat{\sigma}^2) + \ln |R|}{2}, \quad \text{subject to } \theta > 0,$$

where both $\hat{\sigma}^2$ and |R| are functions of θ. This is the same as minimizing

$$\hat{\sigma}^2 |R|^{1/n}, \quad \text{subject to } \theta > 0.$$
This optimization problem is solved using the LFOPC algorithm (Section 2.11). Because of the possible ill-
conditioning of R, a small constant number is adaptively added to its diagonal during optimization. The net
effect is that the approximating functions no longer interpolate the observed response values exactly.
However, these observations are still closely approximated.
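For concreteness, a bare-bones version of the constant-trend predictor above can be written in a few lines. This is a minimal Python/NumPy sketch under the Gaussian correlation function; θ is assumed given rather than obtained from the likelihood maximization, and the small diagonal constant mimics the adaptive stabilization just described:

```python
import numpy as np

def kriging_fit(X, y, theta):
    """Constant-trend Kriging with the Gaussian correlation function.
    X: (L, n) sampling points, y: (L,) responses, theta: (n,) positive."""
    d = X[:, None, :] - X[None, :, :]              # pairwise differences
    R = np.exp(-np.sum(theta * d**2, axis=2))      # L x L correlation matrix
    R += 1.0e-10 * np.eye(len(y))                  # small diagonal constant
    Rinv = np.linalg.inv(R)
    f = np.ones(len(y))
    beta = (f @ Rinv @ y) / (f @ Rinv @ f)         # beta-hat
    return Rinv, beta

def kriging_predict(x, X, y, theta, Rinv, beta):
    """The predictor: y-hat = beta + r^T R^-1 (y - f beta)."""
    r = np.exp(-np.sum(theta * (X - x)**2, axis=1))  # correlation vector
    return beta + r @ Rinv @ (y - beta)
```

The added diagonal term means the surface no longer interpolates exactly, mirroring the behavior described above.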
There is little doubt that polynomial-based response surfaces are the most robust. A negative aspect is that the user has to choose the order of the polynomial, and a greater possibility of bias error exists for a nonlinear response. Linear approximations may therefore only be useful within a certain subregion, and quadratic polynomials may be required for greater global accuracy. However, the linear SRSM method has proved to be excellent for optimization and can be used with confidence [61,62,63].

Neural networks function well as global approximations and no serious deficiencies have been observed when used as prescribed in Section 4.5. NNs have been used successfully for optimization [63] and can be updated during the process.
Although the literature seems to indicate that Kriging is one of the more accurate methods [52], there is evidence of Kriging having fitting problems with certain types of experimental designs [74]. Kriging is very sensitive to noise, since it interpolates the data [26]. The authors of this manual have also experienced fitting problems with non-smooth surfaces (Z(x) peaked at data points) in some cases, apparently due to large values of θ, possibly caused by local optima of the maximum likelihood function. The model construction can be very time-consuming [26] (also experienced with LS-OPT). Furthermore, slight global altering of the Kriging surface due to local updating has also been observed [63].
Reference [63] compares the use of the three metamodeling techniques for crashworthiness optimization.
This paper, which incorporates three case studies in crashworthiness optimization, concludes that while
RSM, NN and Kriging were similar in performance, RSM and NN are the most robust for this application.
The original leap-frog method [59] for unconstrained minimization problems seeks the minimum of a
function of n variables by considering the associated dynamic problem of a particle of unit mass in an
n-dimensional conservative force field, in which the potential energy of the particle at point x(t) at time t is
taken to be the function f(x) to be minimized.
The solution to the constrained problem may be approximated by applying the unconstrained minimization algorithm to a penalty function formulation of the original problem.
The LFOPC algorithm uses a penalty function formulation to incorporate constraints into the optimization problem. This implies that when constraints are violated (active), the violation is magnified and added to an augmented objective function, which is solved by the gradient-based dynamic leap-frog method (LFOP). The algorithm uses three phases: Phase 0, Phase 1 and Phase 2. In Phase 0, the active constraints are introduced as mild penalties: the violations of the (active) constraints are pre-multiplied by a moderate penalty parameter and added to the objective function in the minimization process. After the solution of Phase 0 through the leap-frog dynamic trajectory method, some violation of the constraints is inevitable because of the moderate penalty. In the subsequent Phase 1, the penalty parameter is increased to penalize violations of the remaining active constraints more strictly. Finally, and only if the number of active constraints exceeds the number of design variables, a compromised solution to the optimization problem is found in Phase 2. Otherwise, the solution terminates, having reached convergence in Phase 1.
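To illustrate the penalty idea only (this is not the LFOP trajectory method, and the penalty values below are illustrative rather than LS-OPT defaults), a two-phase penalty scheme can be sketched as follows:

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 2.0)**2 + (x[1] - 2.0)**2   # objective function
cons = [lambda x: x[0] + x[1] - 3.0]              # constraint g(x) <= 0

def augmented(x, rho):
    # violated (active) constraints are magnified by rho and added to f
    viol = sum(max(0.0, g(x))**2 for g in cons)
    return f(x) + rho * viol

x = np.array([0.0, 0.0])
for rho in (1.0e2, 1.0e4):      # Phase 0: moderate; Phase 1: strict penalty
    x = minimize(lambda z: augmented(z, rho), x).x
print(x)                        # close to the constrained optimum [1.5, 1.5]
```

Escalating the penalty between the phases drives the mild Phase-0 violations toward zero, as described above.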
The penalty parameters have default values as listed in the User’s manual (Section 16.3). In addition, the
step size of the algorithm and termination criteria of the subproblem solver are listed.
The values of the responses are scaled with the values at the initial design. The variables are scaled
internally by scaling the design space to the [0; 1] interval. The default parameters in LFOPC (as listed in
Section 16.3) should therefore be adequate. The termination criteria are also listed in Section 16.3.
In the case of an infeasible optimization problem, the solver will find the most feasible design within the
given region of interest bounded by the simple upper and lower bounds. A global solution is attempted by
multiple starts from the experimental design points.
The SRSM method [61] uses a region of interest, a subspace of the design space, to determine an
approximate optimum. A range is chosen for each variable to determine its initial size. A new region of
interest centers on each successive optimum. Progress is made by moving the center of the region of interest
as well as reducing its size. Figure 2-4 shows the possible adaptation of the subregion.
Figure 2-4: Adaptation of the subregion in SRSM: (a) pure panning, (b) pure zooming and (c) a combination of panning and zooming.
The starting point $x^{(0)}$ forms the center point of the first region of interest. The lower and upper bounds $(x_i^{rL,0}, x_i^{rU,0})$ of the initial subregion are calculated using the specified initial range value $r_i^{(0)}$ so that

$$x_i^{rL,0} = x_i^{(0)} - \tfrac{1}{2} r_i^{(0)} \quad \text{and} \quad x_i^{rU,0} = x_i^{(0)} + \tfrac{1}{2} r_i^{(0)}, \qquad i = 1, \ldots, n,$$

where n is the number of design variables. The modification of the ranges on the variables for the next iteration depends on the oscillatory nature of the solution and the accuracy of the current optimum.
Oscillation: A contraction parameter γ is first determined based on whether the current and previous designs $x^{(k)}$ and $x^{(k-1)}$ are on the opposite or the same side of the region of interest. Thus an oscillation indicator c may be determined in iteration k as

$$c^{(k)} = d^{(k)} d^{(k-1)},$$

where $d^{(k)} = 2\Delta x^{(k)}/r^{(k)}$ and $\Delta x^{(k)} = x^{(k)} - x^{(k-1)}$ is the move distance. The oscillation indicator is normalized as

$$\hat{c} = \sqrt{|c|}\, \mathrm{sign}(c). \qquad (2.57)$$

The contraction parameter γ then varies between γ_osc for pure oscillation (ĉ = −1) and γ_pan for pure panning (ĉ = 1), as shown in Figure 2-5.
Figure 2-5: The sub-region contraction rate λ as a function of the oscillation indicator ĉ and the absolute move distance |d|.
Accuracy: The accuracy is estimated using the proximity of the predicted optimum of the current iteration to
the starting (previous) design. The smaller the distance between the starting and optimum designs, the more
rapidly the region of interest will diminish in size. If the solution is on the bound of the region of interest,
the optimal point is estimated to be beyond the region. Therefore a new subregion, which is centered on the
current point, does not change its size. This is called panning (Figure 2-4(a)). If the optimum point coincides
with the previous one, the subregion is stationary, but reduces its size (zooming) (Figure 2-4(b)). Both
panning and zooming may occur if there is partial movement (Figure 2-4(c)). The range ri( k +1) for the new
subregion in the (k + 1)-th iteration is then determined by:
$$r_i^{(k+1)} = \lambda_i\, r_i^{(k)},$$
where $\lambda_i$ represents the contraction rate for each design variable. To determine $\lambda_i$, $d_i^{(k)}$ is incorporated by scaling according to a zoom parameter η that represents pure zooming, and the contraction parameter γ, to yield the contraction rate

$$\lambda_i = \eta + |d_i^{(k)}|\, (\gamma - \eta). \qquad (2.60)$$
When used in conjunction with neural networks or Kriging, the same heuristics are applied as described above. However, the nets are constructed using all the available points, including those belonging to previous iterations. Therefore the response surfaces are progressively updated in the region of the optimal point.
Refer to Section 16.2 for the setting of parameters in the iterative Successive Response Surface Method.
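Putting the pieces together, one iteration of the range update can be sketched as follows. This is our reading of the equations above, in Python, with illustrative parameter values rather than the defaults of Section 16.2:

```python
import numpy as np

def adapt_subregion(x_prev, x_opt, r, d_prev,
                    eta=0.5, gamma_pan=1.0, gamma_osc=0.6):
    """One SRSM update of the region-of-interest ranges r (per variable)."""
    d = 2.0 * (x_opt - x_prev) / r              # normalized move distance
    c = d * d_prev                              # oscillation indicator
    c_hat = np.sqrt(np.abs(c)) * np.sign(c)     # normalized to [-1, 1]
    gamma = 0.5 * (gamma_pan * (1 + c_hat) + gamma_osc * (1 - c_hat))
    lam = eta + np.abs(d) * (gamma - eta)       # contraction rate, Eq. (2.60)
    return lam * r, d       # new range (centered on x_opt) and stored move
```

A pure zoom (d = 0) contracts each range by η, while a move to the subregion boundary (|d| = 1) without oscillation pans at the rate γ_pan, leaving the range unchanged.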
The following example illustrates the convergence performance of the methodology. The example is an unconstrained minimization problem with starting point [1, 1, 1, ..., 1], solution [0, 0, 0, ..., 0] and an initial range of [0.5; 1.5]^n, with the objective to minimize:

$$f(x) = \frac{1}{n} \sum_{i=1}^{n} x_i^2$$
for n = 20, 50 and 100. In Figure 2-6 the successive linear response surface method is compared with the random search method² for 20-, 50- and 100-variable optimization problems. In this example SRSM uses the default number of simulations per iteration, namely 32, 77 and 152 respectively. D-optimal point selection is used. The random search uses 20 LHS simulations per iteration. As expected, the cost increases with n for both SRSM and SRS. Note the logarithmic trends of the convergence for both methods. Each interval on the vertical axis represents an order of magnitude in accuracy.

The user should be aware that the search method picks the best observed design, while the metamodeling methods (especially NN and RSM) are designed to find the best average design.

² Available in Version 2.1.
Figure 2-6: Minimization of a quadratic polynomial. Efficiency comparison of the linear response surface (■) and random search (+) methods for 20, 50 and 100 variables. Each point represents an iteration.
Conduct one iteration, usually utilizing second-order approximations with a large range. Then assess the adequacy of the surfaces using the error parameters. To improve the accuracy of the response surfaces, a smaller region of interest can be chosen, but the trade-off properties will then be artificially bounded by the range of the subregion. To avoid this problem, points can be added to the design space and adaptable response surfaces such as neural networks or Kriging can be used to improve the prediction. The response surfaces are then used to explore the design space, e.g. through trade-off studies. In a trade-off study, a constraint value may be varied for the purpose of observing how the objective function and the optimal design change.
• First-order approximations.
Because of the absence of curvature, 5 to 10 iterations are likely to be required for convergence. The first-order approximation method turns out to be robust thanks to the successive approximation scheme, which addresses possible oscillatory behavior. Linear approximations may be rather inaccurate for trade-off studies, i.e., in general they make poor global approximations, but this is not necessarily true and must be assessed using the error parameters.
• Second-order approximations.
Due to the consideration of curvature, a successive quadratic response surface method is likely to be
more robust, but can be more expensive, depending on the number of design variables.
• Other approximations.
Neural networks (Section 2.10.1) and Kriging (Section 2.10.2) provide good approximations when many
design points are used. A suggested approach is to start the optimization procedure in the full design
space, with the number of points at least of the order of the minimum required for a quadratic
approximation. To converge to an optimum, use the iterative scheme with domain reduction as with any
other approximations, but choose to update the experimental design and response surfaces after each
iteration (this is the default method for neural nets and Kriging). The response surface will be built using
the total number of points. At the end of the iterative optimization procedure, the trade-off can be based
on neural networks or Kriging surfaces using results from all the points calculated. See also Sections
2.10, 4.5.1 and 9.1.3.
• Random search methods. These methods can be used in very much the same way as first-order approximations, but are bound to be more expensive (Section 2.13).
A typical design formulation is somewhat distinct from the standard formulation for mathematical
optimization (Eq. 2.3). Most design problems present multiple objectives, design targets and design
constraints whereas the standard mathematical programming problem is defined in terms of a single
objective and multiple constraints. The standard formulation of Eq. (2.3) has been modified to represent the
more general approach as applied in LS-OPT.
Designs often contain objectives that are in conflict, so that they cannot be achieved simultaneously. If one objective is improved, the other deteriorates, and vice versa. The preference function p[f(x)] combines the various objectives f_i. The Euclidean distance function allows the designer to find the design with the smallest distance to a specified set of target responses or design variables:

$$p = \sum_{i=1}^{p} W_i \left( \frac{f_i(x) - F_i}{\Gamma_i} \right)^2 \qquad (2.62)$$

The symbols F_i represent the target values of the responses. A value Γ_i is used to normalize each response i. Weights W_i are associated with each quantity and can be chosen by the designer to convey the relative importance of each normalized response.
Maximum distance

Another approach to target responses is to use the maximum distance to a target value:

$$p = \max_i \left| \frac{f_i(x) - F_i}{\Gamma_i} \right| \qquad (2.63)$$
This form belongs to the same category of preference functions as the Euclidean distance function [15] and is referred to as the Tchebysheff distance function. A general distance function for target values F_i is defined as

$$p = \left[ \sum_{i=1}^{p} \left| \frac{f_i(x) - F_i}{\Gamma_i} \right|^r \right]^{1/r} \qquad (2.64)$$

with r = 2 for the Euclidean metric and r → ∞ for the min-max formulation (Tchebysheff metric).
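In code, these preference functions are essentially one-liners. The following Python sketch (our own illustration; the names are not LS-OPT identifiers) evaluates Eqs. (2.62)-(2.64):

```python
import numpy as np

def euclidean_pref(f, F, Gamma, W):
    """Eq. (2.62): weighted squared distance to the targets F."""
    return np.sum(W * ((f - F) / Gamma)**2)

def tchebysheff_pref(f, F, Gamma):
    """Eq. (2.63): maximum normalized deviation from the targets."""
    return np.max(np.abs((f - F) / Gamma))

def general_pref(f, F, Gamma, r):
    """Eq. (2.64): r = 2 gives the Euclidean metric; as r grows,
    the value approaches the Tchebysheff metric of Eq. (2.63)."""
    return np.sum(np.abs((f - F) / Gamma)**r)**(1.0 / r)
```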
The approach for dealing with the Tchebysheff formulation differs somewhat from the explicit formulation.
The alternative formulation becomes:
Minimize e, (2.65)

subject to

$$\frac{F_i}{\Gamma_i} - (1 - \alpha_{jL})\, e \;\leq\; \frac{f_i(x)}{\Gamma_i} \;\leq\; \frac{F_i}{\Gamma_i} + (1 - \alpha_{jU})\, e, \qquad i = 1, \ldots, p,\; j = 1, \ldots, m,$$

$$e \geq 0.$$
In the above equation, Γ_i is a normalization factor, e represents the constraint violation or target discrepancy, and α represents the strictness factor. If α = 0, the constraint is slack (or soft) and will allow violation. If α = 1, the constraint is strict (or hard) and will not allow violation.
The effect of distinguishing between strict and soft constraints on the above problem is that the maximum
violation of the soft constraints is minimized. Because the user is seldom aware of the feasibility status of
the design problem at the start of the investigation, the solver will automatically solve the above problem
first to find a feasible region. If the solution to e is zero (or within a small tolerance) the problem has a
feasible region and the solver will immediately continue to minimize the design objective using the feasible
point as a starting point.
• The variable bounds of both the region of interest and the design space are always hard. This is enforced
to prevent extrapolation of the response surface and the occurrence of impossible designs.
• Soft constraints will always be strictly satisfied if a feasible design is possible.
• If a feasible design is not possible, the most feasible design will be computed.
• If feasibility must be compromised (there is no feasible design), the solver will automatically use the slackness of the soft constraints to try to achieve feasibility of the hard constraints. However, even when allowing soft constraints, there is always a possibility that some hard constraints must still be violated. In this case, the variable bounds could be violated, which is highly undesirable as the solution will lie beyond the region of interest and perhaps beyond the design space. If the design formulation is reasonable, the optimizer remains robust and finds such a compromise solution without terminating or resorting to any specialized procedure.
Soft and strict constraints can also be specified for search methods. If there are feasible designs with respect
to hard constraints, but none with respect to all the constraints, including soft constraints, the most feasible
design will be selected. If there are no feasible designs with respect to hard constraints, the problem is ‘hard-
infeasible’ and the optimization terminates with an error message.
In the following cases the use of the Min-Max formulation can be considered:
1. Minimize the maximum of several responses, e.g. minimize the maximum knee force in a vehicle
occupant simulation problem. This is specified by setting both the knee force constraints to have zero
upper bounds. The violation then becomes the actual knee force.
2. Minimize the maximum design variable, e.g. minimize the maximum of several radii in a sheet metal
forming problem. The radii are all incorporated into composite functions, which in turn are incorporated
into constraints which have zero upper bounds.
3. Find the most feasible design. For cases in which a feasible design region does not exist, the user may be
content with allowing the violation of some of the constraints, but is still interested in minimizing this
violation.
There is increasing interest in the coupling of other disciplines into the optimization process, especially for
complex engineering systems like aircraft and automobiles [35]. The aerospace industry was the first to
embrace multidisciplinary design optimization (MDO) [56], because of the complex integration of
aerodynamics, structures, control and propulsion during the development of air- and spacecraft. The
automobile industry has followed suit [58]. In [58], the roof crush performance of a vehicle is coupled to its
Noise, Vibration and Harshness (NVH) characteristics (modal frequency, static bending and torsion
displacements) in a mass minimization study.
Different methods have been proposed when dealing with MDO. The conventional or standard approach is
to evaluate all disciplines simultaneously in one integrated objective and constraint set by applying an
optimizer to the multidisciplinary analysis (MDA), similar to that followed in single-discipline optimization.
The standard method has been called multidisciplinary feasible (MDF), as it maintains feasibility with
respect to the MDA, but as it does not imply feasibility with respect to the disciplinary constraints, it has also been called fully integrated optimization (FIO). A number of MDO formulations are aimed at
decomposing the MDF problem. The choice of MDO formulation depends on the degree of coupling
between the different disciplines and the ratio of shared to total design variables [78]. The MDF formulation was implemented in this version of LS-OPT because it ensures correct coupling between disciplines, albeit at the cost of requiring seamless integration between disciplines that may involve diverse simulation software and different design teams.
In LS-OPT, the user has the capability of assigning different variables, experimental designs and job
specification information to the different solvers or disciplines. The file locations in Version 2 have been
altered to accommodate separate Experiments, AnalysisResults and DesignFunctions files in
each solver’s directory. An example of job-specific information is the ability to control the number of
processors assigned to each discipline separately. This feature allows allocation of memory and processor
resources for a more efficient solution process.
Refer to the user’s manual (Section 14.1) for the details of implementing an MDO problem. There are two
crashworthiness-modal analysis case studies in the examples chapter (Sections 17.6 and 17.7).
For this formulation, a targeted composite function (see Equation (10.1) in Section 10.4) in LS-OPT is used
to construct the residual:
$$\mathrm{LSR} = F = \sum_{j=1}^{R} \left( \frac{f_j(x) - F_j}{\Gamma_j} \right)^2 \qquad (2.66)$$
where Fj are the experimental targets and Γ j are scaling factors required for the normalization or weighting
of each respective response.
In this formulation, the deviations from the respective target values are incorporated as constraint violations,
so that the optimization problem for parameter identification becomes:
Minimize e, (2.67)

subject to

$$\left| \frac{f_j(x) - F_j}{\Gamma_j} \right| \leq e, \qquad j = 1, \ldots, p,$$

$$e \geq 0.$$
This formulation is automatically activated in LS-OPT when both the lower and upper bounds of $f_j/\Gamma_j$ are specified equal to $F_j/\Gamma_j$. There is therefore no need to define an objective function. This is due to the fact that
an auxiliary problem is solved internally whenever an infeasible design is found, ignoring the objective
function until a feasible design is found. When used in parameter identification, the constraint set is in
general never completely satisfied due to the typically over-determined systems used.
Worst-case design involves minimizing an objective with respect to certain variables while maximizing the
objective with respect to other variables. The solution lies in the so-called saddle point of the objective
function and represents a worst-case design. This definition of worst-case design differs from what is sometimes referred to as min-max design, where one multi-objective component is minimized while another is maximized, both with respect to the same variables.
One class of problems involves minimization with respect to design variables and maximization with respect to case or condition variables. One example in automotive design is the minimization of head injury with respect to the design variables of
the interior trim while maximizing over a range of head orientation angles. Therefore the worst-case design
represents the optimal trim design for the worst-case head orientation. Another example is the minimization
of crashworthiness-related criteria (injury, intrusion, etc.) during a frontal impact while maximizing the
same criteria for a range of off-set angles in an oblique impact situation.
Another class of problems involves the introduction of uncontrollable variables zi , i = 1,K, n in addition to
the controlled variables y j , j = 1,K, m . The controlled variables can be set by the designer and therefore
optimized by the program. The uncontrollable variables are determined by the random variability of
manufacturing processes, loadings, materials, etc. Controlled and uncontrollable variables can be
independent, but can also be associated with one another, i.e. a controlled variable can have an
uncontrollable component.
2. The controlled and uncontrollable variables must be separated as minimization and maximization
variables. The objective will therefore be minimized with respect to the controlled variables and
maximized with respect to the uncontrollable variables. This requires a special flag in the optimization
algorithm and the formulation of Equation (2.1) becomes:

$$\min_{y} \max_{z} f(y, z)$$

subject to

$$g_j(y, z) \leq 0, \qquad j = 1, 2, \ldots, l.$$

The algorithm remains a minimization algorithm but with modified gradients (see the sketch after this list):

$$\nabla_y^{mod} := \nabla_y, \qquad \nabla_z^{mod} := -\nabla_z.$$
For a maximization problem the min and max are switched.
3. The dependent set (the subset of y and z that are dependent on each other), x = y + z, must be defined as input for each simulation, e.g. if the manufacturing tolerance on a thickness is specified as the uncontrollable component, it is defined as a variation added to a mean value, i.e. t = t_mean + t_deviation, where t is the dependent variable.
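The sign-flipped gradients above turn an ordinary descent step into a descent-ascent step that seeks the saddle point. A toy Python sketch (our own illustration, not the LS-OPT solver):

```python
def saddle_step(grad_y, grad_z, y, z, step=0.05):
    """One step of a minimization algorithm with modified gradients:
    descend in the controlled variable y, ascend in the
    uncontrollable variable z (the gradient sign is flipped)."""
    return y - step * grad_y, z + step * grad_z

# Example: f(y, z) = (y - 1)^2 - (z - 2)^2 has a saddle at y = 1, z = 2.
y, z = 0.0, 0.0
for _ in range(200):
    y, z = saddle_step(2 * (y - 1), -2 * (z - 2), y, z)
print(y, z)   # approaches the worst-case design (1.0, 2.0)
```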
LS-OPT allows a limited reliability-based design capability by computing the standard deviation of any response. The procedure uses the multidisciplinary optimization capability to separate the mean and random response components. The procedure is as follows:
1. Choose the design variables and divide them into pairs where each pair is represented by a mean
component and a random component. Uncontrollable random variables must be included in the list.
2. Assign the mean variables to a solver MEAN and the random variables to a solver _RANDOM_.
3. Change those variables defined in the MEAN solver and that are uncontrollable to constants.
4. Also change the random variables that are not desired to be random to constants.
5. Select suitable lower and upper bounds for the MEAN variables as for a standard design formulation.
6. The random variables are centered on the starting point so that the starting vector is [0, 0, …, 0]. Omit
the range but choose the upper and lower bounds to conform to the design tolerance, e.g., [-ε /2, + ε /2]
for a tolerance, ε. The tolerance allows a uniformly distributed scatter of the input variables.
7. The most suitable experimental design for the random variables is Latin Hypercube Sampling (LHS).
8. Define a dependent variable as the sum of the mean and random variables (see the sketch after this list). There may be some constants in both categories.
9. Substitute the mean variable names in the MEAN solver simulation deck and the dependent variable
names in the _RANDOM_ solver deck.
10. Define responses for the design criteria for both the MEAN and _RANDOM_ solvers. Also compute the
standard deviations of the _RANDOM_ responses.
11. Add (or subtract) fractions of the standard deviations to (from) the mean corresponding responses to
compensate for the sensitivity of the design.
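Steps 6 to 8 amount to adding a centered, tolerance-bounded random component to each mean value. A minimal Python sketch (the simple stratified LHS below is for illustration only and is not LS-OPT's point selection code; the tolerance and mean values are assumed):

```python
import numpy as np

def lhs_uniform(n_points, n_vars, rng=np.random.default_rng(1)):
    """Simple Latin Hypercube sample on [0, 1]^n: one point per stratum."""
    strata = np.argsort(rng.random((n_points, n_vars)), axis=0)
    return (strata + rng.random((n_points, n_vars))) / n_points

eps = 0.1                        # design tolerance (assumed value)
t_mean = np.array([2.0, 1.5])    # mean variables of the MEAN solver
t_dev = (lhs_uniform(32, 2) - 0.5) * eps   # centered, in [-eps/2, +eps/2]
t = t_mean + t_dev               # dependent variables for the _RANDOM_ runs
```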
This procedure has the advantage that actual simulations are used to model the stochastic behavior. The standard deviations are assumed to be constant over the sub-region, but the method should converge if an iterative scheme is used. An example is given in Section 17.2.9 to illustrate the method.
Care must be taken in interpreting the resulting reliability of the responses. Accuracy can be especially poor
at the tail ends of the response distribution.
For a (very) conservative estimate of failure, Tchebysheff's Theorem can be considered:

$$P(|Y - \mu| \geq k\sigma) \leq \frac{1}{k^2} \quad \text{or, equivalently,} \quad P(|Y - \mu| < k\sigma) \geq 1 - \frac{1}{k^2},$$

with Y the stochastic response with mean µ and variance σ², and k a positive constant. The theorem says that the probability that the outcome lies outside k standard deviations of the mean is at most 1/k². Conversely, it also says that the probability that the outcome lies within k standard deviations of the mean is at least 1 − 1/k². The relationship is valid for any probability distribution. Note that both tails of the distribution are considered in the above.
The choice of a uniform distribution for the design variable also contributes to the error in the estimated
reliability. The analyst can adjust the distribution parameters (lower and upper bounds) for a more
conservative estimate of failure.
3. Probabilistic Fundamentals
3.1 Introduction
No system will be manufactured and operated exactly as designed. Adverse combinations of design and
loading variation may lead to undesirable behavior or failure; therefore, if significant variation exists, a
probabilistic evaluation may be desirable.
The relationship between the control variables and the variance can be used to adjust the control process
variables in order to have an optimum process. The variance of the control and noise variables can be used
to predict the variance of the system, which may then be used for redesign. Knowledge of the interaction
between the control and noise variables can be valuable; for example, information such as that the dispersion effect of the material variation (a noise variable) may be smaller at a high process temperature (a control variable) can be used to select control variables for a more robust manufacturing process.
The components will then all follow the same distribution but the actual value of each component will
differ. Each duplicate component is in effect an additional variable and will result in additional
computational cost (contribute to the curse of dimensionality) for techniques requiring an experimental
design to build an approximation or requiring the derivative information such as FORM. Direct Monte Carlo
simulation on the other hand does not suffer from the curse of dimensionality but is expensive when
evaluating events with a small probability.
Design variables can be linked to have the same expected (nominal) value, but allowed to vary
independently according to the statistical distribution during a probabilistic analysis. One can therefore have
one design variable associated with many probabilistic variables.
The current version of LS-OPT provides only Monte Carlo evaluation using approximations. Methods considering the Most Probable Point (MPP) of failure will be included in future versions of LS-OPT.
The accuracy can be limited by the accuracy of the data used in the computations as well as the accuracy of
the simulation. The choice of methods depends on the desired accuracy and use of the reliability
information.
• Compute the distribution of the responses, in particular the mean and standard deviation.
• Compute reliability.
• Investigate design space – search for outliers.
The error of estimating p, the probability of an event, is a random variable with variance

$$\sigma^2_{\hat{\theta}} = \frac{p(1 - p)}{N},$$

which can be manipulated to provide a minimum sample size. A suggestion for the minimum sampling size provided by Tu and Choi [68] is:

$$N = \frac{10}{P[G(x) \leq 0]}.$$
The above indicates that for a 10% estimated probability of failure, about 100 structural evaluations are required, with some confidence in the first digit of the failure prediction. To verify an event having a 1% probability, about 1000 structural analyses are required, which would usually be too expensive.
A general procedure of obtaining the minimum number of sampling points for a given accuracy is illustrated
using an example at the end of this section. For more information, a statistics text (for example, reference
[40]) should be consulted. A collection of statistical tables and formulae such as the CRC reference [32] will
also be useful.
The variance of the probability estimate must be taken into consideration when comparing two different designs. The error of estimating the difference of the mean values is a random variable with a variance of

$$\sigma^2_{\hat{\theta}} = \frac{\sigma_1^2}{N_1} + \frac{\sigma_2^2}{N_2},$$

with the subscripts 1 and 2 referring to the two design evaluations. The error of estimating the difference of sample proportions is a random variable with a variance of

$$\sigma^2_{\hat{\theta}} = \frac{p_1(1 - p_1)}{N_1} + \frac{p_2(1 - p_2)}{N_2}.$$
The Monte Carlo method can therefore become prohibitively expensive for computing events with small
probabilities; more so if you need to compare different designs.
The procedure can be sped up using Latin Hypercube sampling, which is available in LS-OPT. These
sampling techniques are described elsewhere in the LS-OPT manual. The experimental design will first be
computed in a normalized, uniformly distributed design space and then transformed to the distributions
specified for the design variables.
Example:
The reliability of a structure is being evaluated. The probability of failure is estimated to be 0.1 and must be
computed to an accuracy of 0.01 with a 95% confidence. The minimum number of function evaluations
must be computed.
For an accuracy of 0.01 we use a confidence interval having a probability of 0.95 of containing the correct value. The accuracy of 0.01 is taken as 4.5 standard deviations wide using Tchebysheff's theorem (see Appendix A), which gives a standard deviation of 0.0022. The minimum number of sampling points is therefore:

$$N = \frac{pq}{\sigma^2} = \frac{(0.9)(0.1)}{(0.0022)^2} = 18595.$$

Tchebysheff's theorem is quite conservative. If we consider the response to be normally distributed, then for an accuracy of 0.01 and a corresponding confidence interval having a probability of 0.95 of containing the correct value, a confidence interval 1.96 standard deviations wide is required. The resulting standard deviation is 0.0051 and the minimum number of sampling points is accordingly:

$$N = \frac{pq}{\sigma^2} = \frac{(0.9)(0.1)}{(0.0051)^2} = 3457.$$
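These numbers are easy to reproduce. A small Python check (exact arithmetic gives a slightly different value in the first case than the text, which rounds the standard deviation to 0.0022 before squaring):

```python
def min_samples(p, accuracy, k):
    """N = p(1 - p) / sigma^2 with sigma = accuracy / k, where k is the
    confidence-interval half-width in standard deviations."""
    sigma = accuracy / k
    return p * (1.0 - p) / sigma**2

print(round(min_samples(0.1, 0.01, 4.5)))    # Tchebysheff: 18225 (18595 with rounded sigma)
print(round(min_samples(0.1, 0.01, 1.96)))   # normal assumption: 3457
```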
$$\beta = \frac{E[G(X)]}{D[G(X)]},$$
with E and D the expected value and standard deviation operators, respectively. A normally distributed response can be assumed for the estimation of the probability of failure; that is, the central limit theorem is assumed to be valid. The probability of failure is then computed as:

$$P_f = \Phi(-\beta),$$

with Φ(x) the cumulative distribution function of the normal distribution.
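As a quick illustration (our own sketch; `g` stands for samples of the limit-state function G(X) obtained, for example, from cheap metamodel evaluations):

```python
import numpy as np
from math import erf, sqrt

def prob_failure(g_samples):
    """beta = E[G]/D[G]; P_f = Phi(-beta), assuming G is roughly normal."""
    beta = np.mean(g_samples) / np.std(g_samples, ddof=1)
    Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))  # standard normal CDF
    return Phi(-beta)

g = np.random.default_rng(0).normal(2.0, 1.0, 10000)  # toy samples of G(X)
print(prob_failure(g))   # close to Phi(-2) = 0.0228
```

The caveats below indicate when this normality assumption breaks down.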
• Nonlinear responses: Say we have a normally distributed stress response; fatigue failure, being a nonlinear function of the stress, is then not normally distributed.
• The variables are not normally distributed; for example, one is uniformly distributed. In which case:
o A small number of variables may not sum up to a normally distributed response, even for a
linear response.
o The response may be strongly dependent on the behavior of a single variable. The
distribution associated with this variable may then dominate the variation of the response.
The assumption is less valid at the tail regions of the distribution. Usually the tail regions are of specific
interest.
• The responses must be linear functions of the design variables. Or sufficiently linear, where
sufficiently linear requires:
o The constraint associated with the response is active (it is being evaluated close to the most
probable point).
o The linearized response is sufficiently accurate over a range encompassed by the variables.
• Normally distributed design variables
A very large number of function evaluations (millions) is possible, considering that function evaluations using the metamodels are very cheap. Accordingly, given an exact approximation of the responses, the exact probability of an event can be computed.
The choice of the point about which the approximation is constructed has an influence on accuracy.
Accuracy may suffer if the metamodel is not accurate close to the failure initiation hyperplane, G (x) = 0 . A
metamodel accurate at the failure initiation hyperplane (more specifically the Most Probable Point of
failure) is desirable in the cases of nonlinear responses. The results should however be exact for linear
responses or quadratic responses approximated using a quadratic response surface.
Using approximations to search for improved designs can be very cost-efficient. Even in cases where
absolute accuracy is not good, the technique can still indicate whether a new design is comparatively better.
The number of FE evaluations required to build the approximations increases linearly with the number of
variables for linear approximations (the default being 1.5n points) and quadratically for quadratic
approximations (the default being 0.75(n+2)(n+1) points).
USER’S MANUAL
4. Design Optimization Process
• LS-OPT is command-file based. A graphical user interface, LS-OPTui, allows the entry and
modification of command files. The command language is flexible and easy-to-use. Simple
problems do not require complex descriptions.
• Design variables can be specified in either the preprocessor input files or the solver input files.
Solver input files are typically only useful for sizing or the variation of material constants or load
curves or intensities. Interfacing with a parametric preprocessor allows the designer to use the
full capability thereof for optimization, so that other forms of design such as shape optimization
can be made possible. Standard interfaces are provided for a number of well-known geometrical
preprocessors.
• The LS-DYNA interface is comprehensive. Response variables of the ASCII and binary
databases (including the LS-DYNA Ver. 970 Binout database) can be extracted. The time-
dependent responses are available at any time. Average, maximum and minimum responses over
time can be extracted.
• Time-dependent LS-DYNA response can be post-processed by filtering or integration.
• Metal forming criteria are provided.
• Job execution and result extraction can be conducted on remote nodes in parallel. Finished jobs
are immediately replaced by waiting jobs until completion. File transfer to and from the remote
nodes is automatic and occurs at standard file transfer (ftp) speed.
• Since designs are typically target-oriented, the post-processing utilities were designed to include
a comprehensive array of multi-criteria optimization features. Design trade-off curves can be
constructed. Min.-Max. problems, in which the objective is to minimize the maximum value of
several variables, can be addressed.
• The design space can be bounded by the design variables as well as the response variables. Thus only 'reasonable' designs need to be analyzed; e.g., excessively massive designs need not be incorporated.
• Multidisciplinary optimization (MDO) can be conducted using more than one solver and more
than one analysis case for each solver. Variables can be defined to be exclusive to disciplines,
but can also be shared. Experimental design and job execution data can be exclusive to each
discipline. The software also features the restart of an aborted parallel multidisciplinary analysis
schedule. Error terminations are tagged for post-processing purposes.
• Mode tracking can be activated for frequency based criteria (LS-DYNA interface).
• A response surface and its optimal solution can be improved to a desired tolerance by successive
iteration. This process has been automated.
• An automated sequential random search method is available.
• Dependent variables, responses and composite functions can be defined using C-like
mathematical expressions.
• LS-OPT features a limited reliability-based design capability that allows simple statistical
analysis of result output. The computation of an optimum and reliable design has been
automated. Random, uncontrollable variables are allowed.
• Probabilistic Modeling and Monte Carlo Simulation.
• Version 3 of LS-OPT is being designed as a full Reliability-Based Design Optimization code.
Since the design optimization process is expensive, the designer should avoid discovering major flaws in the
model or process at an advanced stage of the design. Therefore the procedure must be carefully planned and
the designer needs to be familiar with the model, procedure and design tools well in advance. The following
points are considered important:
1. The user should be familiar with and have confidence in the accuracy of the model (e.g. finite element
model) used for the design. Without a reliable model, the design would make little or no sense.
2. Select suitable criteria to formulate the design. The responses represented in the criteria must be
produced by the analyses and be accessible to LS-OPT.
3. Request the necessary output from the analysis program and set appropriate time intervals for time-
dependent output. Avoid unnecessary output as a high rate of output will rapidly deplete the available
storage space.
4. Run at least one simulation using LS-OPT. To save time, the termination time of the simulation can be
reduced substantially. This exercise will test the response extraction commands and various other
features.
5. Just as in the case of traditional simulation it is advisable to dump restart files for long simulations.
LS-OPT will automatically restart a design simulation if a restart file is available. For this purpose, the
runrsf file is required when using LS-DYNA as solver.
6. Determine suitable design parameters. In the beginning it is important to select many rather than few
design variables. If more than one discipline is involved in the design, some interdisciplinary discussion
is required with regard to the choice of design variables.
7. Determine suitable starting values for the design parameters. The starting values are an estimate of the
optimum design. These values can be acquired from a present design if it exists. The starting design will
form the center point of the first region of interest.
8. Choose a design space. This is represented by absolute bounds on the variables that you have chosen.
The responses may also be bounded if previous information of the functional responses is available.
Even a simple approximation of the design response can be useful to determine approximate function
bounds for conducting an analysis.
9. Choose a suitable starting design range for the design variables. The range should be neither too small,
nor too large. A small design region is conservative but may require many iterations to converge or may
not allow convergence of the design at all. It may be too small to capture the variability of the response
because of the dominance of noise. It may also be too large, such that a large modeling error is
introduced. This is usually less serious as the region of interest is gradually reduced during the
optimization process.
10. Choose a suitable order for the design approximations when using polynomial response surfaces (the
default). A good starting approximation is linear because it requires the least number of analyses to
construct. However it is also the least accurate. The choice therefore also depends on the available
resources. However linear experimental designs can be easily augmented to incorporate higher order
terms.
If a large parallel computer or nodes on a network are available for distributed computing, it may not be
worth the trouble of first constructing a linear approximation, then continuing on to a higher order one.
Before using neural nets or Kriging surfaces as approximations, please consult Section 4.5.
After suitable preparation, the optimization process may now be commenced. At this point, the user has to
decide whether to use an automated iterative procedure or whether to firstly perform variable screening
(through ANOVA) based on one or a few iterations. Variable screening is important for reducing the
number of design variables, and therefore the overall computational time.
An automated iterative procedure can be conducted with any choice of approximating function. It
automatically adjusts the size of the subregion and automatically terminates whenever the stopping criterion
is satisfied. If a single optimal point is desired, this is probably the procedure to use. If there is a large
number of design variables, a linear approximation can be chosen.
However a step-by-step semi-automated procedure can be just as useful, since it allows the designer to
proceed more resourcefully. Computer time can be wasted with iterative methods, especially if handled
carelessly. It mostly pays to pause after every iteration (especially the first) to allow verification of the data
and design formulation and inspection of the results, including ANOVA data. In many cases it takes only 2
to 3 iterations to achieve a reasonably optimal design. An improvement of the design can usually be
achieved within one iteration.
1. Evaluate as many points as required to construct a linear approximation. Assess the accuracy of the
linear approximation using any of the error parameters. Inspect the main effects by looking at the
ANOVA results. This will highlight insignificant variables that may be removed from the problem. An
ANOVA is simply a single iteration run, typically using a linear response surface to investigate main
and/or interaction effects. The ANOVA results can be viewed in the post-processor.
2. If the linear approximation is not accurate enough, add enough points to enable the construction of a quadratic approximation. Assess the accuracy of the quadratic approximation. Intermediate steps can be added to assess the accuracy of the interaction and/or elliptic approximations.
3. If the second-order approximation is not accurate enough, the problem may be twofold: (3a) there is significant noise in the design response, or (3b) there is a modeling (bias) error, i.e., the response is significantly nonlinear over the subregion.
In case (3a), different approaches can be taken. Firstly, the user should try to identify the source of the
noise. E.g., when considering acceleration-related responses, was filtering performed? Are sufficient
significant digits available for the response in the extraction database? Is mesh adaptivity used correctly?
Secondly, if the noise cannot be attributed to a specific numerical source, the process being modeled
may be chaotic or random, leading to a noisy response. In this case, the user could implement reliability-
based design optimization techniques as described in Section 2.15.5. Thirdly, other less noisy, but still
relevant, design responses could be considered as alternative objective or constraint functions in the
formulation of the optimization problem.
In most cases the source of discrepancy cannot be identified, so in either case a further iteration would
be required to determine whether the design can be improved.
4. Optimize the approximate subproblem. The solution will be either in the interior or on the boundary of
the subregion.
If the approximate solution is in the interior, the solution may be good enough, especially if it is close to
the starting point. It is recommended to analyze the optimum design to verify its accuracy. If the
accuracy of any of the functions in the current subproblem is poor, another iteration is required with a
reduced subregion size.
If the solution is on the boundary of the subregion the desired solution is probably beyond the region.
Therefore, if the user wants to explore the design space more fully, a new approximation has to be built.
The accuracy of the current response surfaces can be used as an indication of whether to reduce the size
of the new region.
The whole procedure can then be repeated for the new subregion and is repeated automatically when
selecting a larger number of iterations initially.
1. Test the robustness of the analysis model by running a few (perhaps two or three) designs in the extreme
corners of the chosen design space. Run these designs to their full term (in the case of time-dependent
analysis). Two important designs are those with all the design variables set at their minimum and
maximum values. The starting design can be run by selecting ‘0’ as the number of iterations in the Run
panel.
2. Check that the design variables, dependents and/or constants are substituted into the input file as
intended.
3. Modify the input to define the experimental design for a full analysis.
4. For a time dependent analysis or non-linear analysis, reduce the termination time or load significantly to
test the logistics and features of the problem and solution procedure.
5. Execute LS-OPT with the full problem specified and monitor the process.
• Global optimality. The Karush-Kuhn-Tucker conditions [Eqs. (2.3)] govern the local optimality of a
point. However, there may be more than one optimum in the design space. This is typical of most
designs, and even the simplest design problem (such as the well known 10-bar truss with 10 design
variables), may have more than one optimum. The objective is, of course, to find the global optimum.
Many gradient-based as well as discrete optimal design methods have been devised to address global
optimality rigorously, but as there is no mathematical criterion available for global optimality, nothing
short of an exhaustive search method can determine whether a design is optimal or not. Most global
optimization methods require large numbers of function evaluations (simulations). In LS-OPT, global
optimality is treated on the level of the approximate subproblem through a multi-start method
originating at all the experimental design points.
• Noise. Although noise may evince the same problems as global optimality, the term refers more to a
high frequency, randomly jagged response than an undulating one. This may be largely due to numerical
round-off and/or chaotic behavior. Even though the application of analytical or semi-analytical design
sensitivities for ‘noisy’ problems is currently an active research subject, suitable gradient-based
optimization methods which can be applied to impact and metal-forming problems are not likely to be
forthcoming. This is largely because of the continuity requirements of optimization algorithms and the
increased expense of the sensitivity analysis. Although fewer function evaluations are required,
analytical sensitivity analysis is costly to implement and probably even more costly to parallelize.
• Non-robust designs. Because RSM is a global approximation method, the experimental design may
contain designs in the remote corners of the region of interest which are prone to failure during
simulation (aside from the fact that the designer may not be remotely interested in these designs). An
example is the identification of the parameters of a monotonic load curve which in some of the
parameter sets proposed by the experimental design may be non-monotonic. This may cause unexpected
behavior and possible failure of the simulation process. This is almost always an indication that the
design formulation is non-robust. In most cases poor design formulations can be eliminated by providing
suitable constraints to the problem and using these to limit future experimental designs to a ‘reasonable’
design space (see Section 2.7).
• Impossible designs. The set of impossible designs represents a ‘hole’ in the design space. A simple
example is a two-bar truss structure with each of the truss members being assigned a length parameter.
An impossible design occurs when the design variables are such that the sum of the lengths becomes
smaller than the base measurement, and the truss becomes unassemblable. It can also occur if the design
space is violated resulting in unreasonable variables such as non-positive sizes of members or angles
outside the range of operability. In complex structures it may be difficult to formulate explicit bounds of
impossible regions or ‘holes’.
The difference between a non-robust design and an impossible one is that the non-robust design may show
unexpected behavior, causing the run to be aborted, while the impossible design cannot be synthesized at
all.
1. Conduct a variable screening procedure to determine important variables. Linear polynomials can be
used for this purpose.
2. Augment the existing design points with a space-filling method using approximately the same number of
points as would be required for a quadratic approximation. The greater the number of points, the better
the approximation. To fully exploit the updating feature and the potential for tradeoff studies, it is
recommended that the entire design space be used to construct the initial space-filling design.
3. Execute the simulation runs and find the predicted optimum point. Verify the design accuracy by
simulating the predicted design.
4. Combine with a second iteration if the first appears to be inaccurate. More iterations can be done
depending on the convergence requirement. The NN/Kriging-based optimization is automated using the
same domain-reduction scheme (see Section 2.12) as the successive polynomial response surface
method. However, in the case of neural networks or Kriging approximations, it makes sense to update
the experimental design, so updated (the default) should be selected.
The advantage of using neural networks or Kriging surfaces is the avoidance of having to choose a
polynomial order, the adaptability of the response surface, and the global nature of the final surface that can
subsequently be used for trade-off studies or reliability investigations.
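As a rough sketch of such a setup in command file form (the keyword spelling follows the examples of Chapter 9, but should be verified against the command reference; the number of points is illustrative):
Order FF
experimental design space_filling
Number experiment 25
iterate 5
Here FF selects the feed-forward neural network approximation, the space-filling scheme distributes the 25 points over the design space, and iterate requests five successive iterations.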
5. Graphical User Interface and
Command Language
This chapter introduces the graphical user interface, the command language and describes syntax rules for
names of variables, strings and expressions.
The user interface is launched with the command:
lsoptui [command_file]
The layout of the menu structure (Figure 5-1) mimics the optimization setup process, starting from the problem description, through the selection of design variables and experimental design, the definition of responses, and finally the formulation of the optimization problem (objectives and constraints). The run information (number of processors, monitoring and termination criteria) is also controlled via LS-OPTui.
A description of the problem can be given in double quotes. This description is echoed in the lsopt_
input and lsopt_output files and in the plot file titles.
Example:
"Frontal Impact"
The numbers of variables, constraints, responses and objectives are echoed from the graphical user input. These can be modified by the user in the command file.
Example:
variable 2
constraint 1
responses 2
objectives 2
The most important data commands are the definitions. These serve to define the various entities which
constitute the design problem namely solvers, variables, responses, objectives, constraints and composites.
The definition commands are:
solver package_name
constant name value
variable name value
dependent name value
history name string
response name string
composite name type type
composite name string
objective name entity weight
constraint name entity name
Each definition identifies the entity with a name and must be placed before any other occurrence of the
name.
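As an illustration of how the definitions fit together, a minimal command file fragment might read as follows (the solver name, variable and node number are hypothetical; the response interface syntax is described in Section 10):
solver dyna 'crash'
variable 'Thickness' 1.5
response 'Disp' "DynaASCII nodout r_disp 421 TIMESTEP 0"
objective 'Disp'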
The Design Command Language (DCL) is used as a medium for defining the input to the design process. This language is based on approximately 70 command phrases drawing on a vocabulary of about 70 words. Names can be used to describe the various design entities. The command input file consists of a sequence of text commands describing the design optimization process. The command syntax is not case sensitive.
5.3.1 Names
Entities such as variables, responses, etc. are identified by their names. The following entities must be given
unique names:
solver
constant
variable
dependent
history
response
composite
objective
constraint
In addition to numbers 0-9, upper or lower case letters, a name can contain any of the following characters:
_-.
Note:
Because mathematical expressions can be constructed using various entities in the same formula,
duplication of names is not allowed.
Preprocessor commands, solver commands or response extraction commands are enclosed in double quotes,
e.g.
In addition to numbers 0-9, upper or lower case letters and spaces, a command line can contain any of the
following characters:
_=-.'/<>;`
In the command input file, a line starting with the character $ is ignored.
Input file names for the solver and preprocessor must be specified in double quotes.
The commands in the input file fall into two categories:
• problem data
• solution tasks
The only command for specifying a task is the iterate command. All the remaining commands are for the specification of problem data. A solution task command serves to execute a solver or processor while the other commands store the design data in memory.
In the following chapters, the command descriptions can be easily found by looking for the large typescript
bounded by horizontal lines. Otherwise the reader may refer to the quick reference manual that also serves
as an index. The default values are given in angular brackets, e.g. < 1 >.
5.3.5 Environments
Environments have been defined to represent all dependent entities that follow. The only environments in
LS-OPT are for
• solver identifier_name
All responses, response histories, solver variables, solver experiments and solver-related job information
defined within this environment are associated with the particular solver.
• strict, slack/soft Pertains to the strictness of constraints. See Section 11.5.
• move, stay Pertains to whether constraints should be used to define a reasonable design space or
not for the experimental design. See Section 9.4.
5.3.6 Expressions
Each entity can be defined as a standard formula, a mathematical expression or can be computed with a
user-supplied program that reads the values of known entities. The bullets below indicate which options
apply to the various entities. Variables are initialized as specified numbers.
A list of mathematical and special function expressions that may be used is given in Appendix E :
Mathematical Expressions.
6. Program Execution
This chapter describes the directory structure, output and status files and logistical handling of a simulation-
based optimization run.
The LSOPT environment variable is automatically set to the location of the lsopt executable.
Figure 6-1: Directory structure in LS-OPT. The work directory contains the command file, the input and output files, and plot files (e.g. FLD); each solver has its own run directories (1.1, 1.2, 1.3, 1.4, 1.5, ...) containing the simulation files, intermediate files and status files.
These sub-directories are named solver_name/mmm.nnnn, where mmm represents the iteration number and nnnn is a number starting from 1. solver_name represents the solver interface specified with the command, e.g.
solver dyna 'side_impact'
In this case dyna is a reserved package name and side_impact is a solver name chosen by the user.
The work directory needs to contain at least the command file and the template input files. Various other
files may be required such as a command file for a preprocessor. An example of a sub-directory name,
defined by LS-OPT, is side_impact/3.11, where 3.11 represents the design point number of
iteration 3. The creation of subdirectories is automated and the user only needs to deal with the working
directory.
In the case of simulation runs being conducted on remote nodes, a replica of the run directory is
automatically created on the remote machine. The response.n and history.n files will automatically
be transferred back to the local run directory at the end of the simulation run. These are the only files
required by LS-OPT for further processing.
In the batch version, the user may also type control-C to get the following response:
Jobs started
Got control C. Trying to pause scheduler ...
Enter the type of sense switch:
sw1: Terminate all running jobs
sw2: Get a current job status report for all jobs
t: Set the report interval
v: Toggle the reporting status level to verbose
stop: Suspend all jobs
cont: Continue all jobs
c: Continue the program without taking any action
Program will resume in 15 seconds if you do not enter a choice switch:
If v is selected, more detailed information of the jobs is provided, namely event time, time step, internal
energy, ratio of total to internal energy, kinetic energy and total velocity.
6.6 Restarting
Restarting is conducted by giving the command:
lsopt command_file_name, or by selecting the Run button in the Run panel of LS-OPTui.
Completed simulation runs will be ignored, while half completed runs will be restarted automatically.
However, the user must ensure that an appropriate restart file is dumped by the solver by specifying its
name and dump frequency.
1. As a general rule, the run directory structure should not be erased. The reason is that, on restart, LS-OPT will determine the status of progress made during a previous run from the status and output files in the directories. Important data such as response values (response.n files) and response histories (history.n files) are kept only in the run directories and are not available elsewhere.
2. In most cases, after a failed run, the optimization run can be restarted as if starting from the beginning.
There are a few notable exceptions:
a. A single iteration has been carried out but the design formulation is incorrect and must be changed.
b. Incorrect data was extracted, e.g., for the wrong node or in the wrong direction.
c. The user wants to change the response surface type, but keep the original experimental design.
In the above cases, all the history.n and response.n files must be deleted. After restarting, the data will then be newly extracted and the subsequent phases will be executed. If more than one iteration was completed, a restart will only be able to retain the data of the first iteration; the directories of all higher iterations must be deleted in their entirety. Unless the database was deleted (e.g. by using the clean file, see Section 6.9), no simulations will be unnecessarily repeated, and the simulation run should continue normally.
3. A restart can be made from any particular iteration by selecting the ‘Specify Starting Iteration’ button on
the Run panel, and entering the iteration number. The subdirectories representing this iteration and all
higher-numbered iterations will be deleted after selecting the Run button and confirming the selection.
Note: Starting with Version 2.1 the user must delete these files when attempting a clean start.
The user interface LS-OPTui uses the message in the EXIT_STATUS file as a pop-up message.
The lfop.log file contains a log of the core optimization solver solution.
The simulation run/extraction log is saved in a file called lognnnnnn in the local run directory, where
nnnnnn represents the process ID number of the run. An example of a logfile name is log234771.
An example of the contents of a clean file is:
rm -rf d3*
rm -rf elout
rm -rf nodout
rm -rf rcforc
The clean file will be executed immediately after each simulation and will clean all the run directories
except the baseline (first or 1.1) and the optimum (last) runs. Care should be taken not to delete the lowest
level directories or the log files prepro, started, replace, finished, response.n or
history.n (which must remain in the lowest level directories). These directories and log files indicate
different levels of completion status which are essential for effective restarting. Each file
response.response_number contains the extracted value for the response: response_number. E.g., the
file response.2 contains the extracted value of response 2. The data is thus preserved even if all solver
data files are deleted. The response_number starts from 0.
The following status and data files are kept in each run directory:
prepro
XPoint
replace
started
finished
response.0
response.1
...
history.0
history.1
...
Remarks:
If a parallel solver is used, the number of concurrent jobs used for the solution will be number_of_jobs times
the number of cpu’s specified for the solver.
Example:
concurrent jobs 16
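If, for example, each job runs a parallel solver on 4 cpu's, the command above implies that up to 16 × 4 = 64 cpu's may be in use at any one time.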
To run LS-OPT with a queuing (load-sharing) facility, the following binary files are provided in the /bin directory, which is un-tarred from the distribution tar file during the installation of LS-OPT:
bin/wrappers/wrapper_*
bin/wrappers/perl_*
bin/wrappers/taurus_*
bin/runqueuer
bin/DynaMass
1. Create a directory on the remote machine for keeping all the executables including lsdyna.
(a) Copy the appropriate executable wrapper_* program located in the bin/wrappers directory
to this directory. E.g. if you are running LS-DYNA on HP, place wrapper_hp on this machine.
Rename it to wrapper.
(b) Do the same for the appropriate perl_* program and rename it to perl.
(c) Do the same for the appropriate taurus_* program and rename it to taurus.
(d) Copy the DynaMass script to the same directory.
Local installation
2. Select the queuer option in LS-OPTui, or add a statement in the LS-OPT command file to identify the queuing system.
3. Change the script you use to run the solver via the queuing facility by prepending wrapper to the
solver execution command. Use full path names for both the wrapper and executable or make sure the
path on the remote machine includes the directory where the executables are kept.
The argument for the input deck specified in the script must always be the LS-OPT reserved name for
the chosen solver, e.g. for LS-DYNA use DynaOpt.inp.
Examples:
/vclass/user/bin/wrappers/wrapper_hp \
/vclass/dyna/ls970 i=DynaOpt.inp memory=2m 2> a.err
The following command is executed within a UNIX script which executes a second script to run the
DYNA/MPP job:
/vclass/user/bin/wrappers/wrapper_hp \
/vclass/dyna/mppscript mppopt DynaOpt.inp 2
#!/bin/sh
/opt/mpi/bin/mpirun /vclass/dyna/mpp970 -np $3 i=$2 2> $1.err
cat dbout.* > dbout
/vclass/dyna/dumpbdb dbout
The wrapper will not execute multiple commands and therefore a script is required in this case. Wrapping the entire script ensures that (i) all the result files are available for extraction as soon as the wrapper terminates and (ii) the sense switches will allow job termination. The following example script builds and submits an LSF job for this purpose:
#!/bin/sh
#
inputfile=$1
precision=$2
version=$3
#
jobname=`echo $inputfile | sed 's/\./ /g' | awk '{print $1}'`
#
version='970'
#
input_dir=`pwd`
local_cnt=`echo $input_dir | sed 's/\// /g' | wc | awk '{print $2}'`
precnt=`expr $local_cnt - 1`
local_pre=`echo $input_dir | sed 's/\// /g' | awk '{print $'"$precnt"'}'`
local_dir=`echo $input_dir | sed 's/\// /g' | awk '{print $'"$local_cnt"'}'`
#
cd ${input_dir}
#
export WORKDIR="/net/o300"$input_dir
#
echo " ------------------------------------------"
echo " This job will run under directory $WORKDIR"
echo " ------------------------------------------"
#
cat > ${jobname}_${local_dir}.job << EOF
#BSUB -J ${local_pre}_${local_dir}
# /bin/sh
#
cd $WORKDIR
#
export LSTC_FILE=/usr/company/license/LSTC_FILE
echo i=$inputfile memory=5000000 > commandline
exit
#
EOF
#
# LSF queuing command
#
bsub -m vclass.lstc.com < ${jobname}_${local_dir}.job &
#
This command produces a result database in a subdirectory called newdatabase. The Version 2a LS-
OPTui can be executed from inside the newdatabase directory.
7. Interfacing to a solver or
preprocessor
This chapter describes how to interface LS-OPT with a simulation package and/or a parametric
preprocessor. Standard interfaces as well as interfaces for user-defined executables are discussed.
The design variables must be identified in the input file using the keywords <<expression>>, where expression is an expression which incorporates constants, design variables or dependents. Inserting the relevant design variable or expression into the preprocessor command file requires that the numerical value of the parameter in question (e.g. the fillet radius) be replaced with the corresponding keyword (e.g. <<Radius>>, where the design variable named Radius is the radius of the fillet).
Similarly, if the design variables are to be specified using a finite element input deck, then data lines such as
*SECTION_SHELL
1, 10, , 3.000
0.002, 0.002, 0.002, 0.002
must be replaced with
*SECTION_SHELL
1, 10, , 3.000
<<Thickness_3>>,<<Thickness_3>>,<<Thickness_3>>,<<Thickness_3>>
If a solver but no preprocessor has been specified, only the relevant solver utility routines will be executed.
The field width of the substituted variable has been set to 10, with three digits after the decimal point (the C language notation is %10.3e). Care must be taken not to exceed the maximum field width tolerated by the simulation package. Consult the relevant User's manual for rules regarding input format.
Both the preprocessor and solver input and append files are specified in this panel. Multiple solvers (as used
in multi-case or multi-disciplinary applications) are defined by selecting ’Add solver’. The ’Replace’ button
must be used after the modification of current data.
Execution command. The command to execute the solver must be specified. The command depends on the
solver type and could be a script, but typically excludes the solver input file name argument as this is
specified using a separate command.
Input template files. An input template file in which the design variables have been replaced by the
keywords <<name>> can be specified. LS-OPT converts the template to an input deck for the preprocessor
or solver by replacing each entire string <<name>> with a number. During run-time, LS-OPT appends a
standard input deck name to the end of the execution command. In the case of the standard solvers, the
appropriate syntax is used (e.g. i=DynaOpt.inp for LS-DYNA). For a user-defined solver, the name
UserOpt.inp is appended. The specification of an input file is not required for a user-defined solver.
Appended file. Additional solver data can be appended to the input deck using the
solver_append_file_name file. This file can contain variables to be substituted.
A report interval can be specified in the command-line version of LS-OPT only. A progress report is given
for the runs at regular intervals. This report identifies jobs waiting, jobs running, jobs completed and jobs
aborted as well as other statistics such as time remaining and relative progress toward completion.
More than one analysis case may be run using the same solver. If a new solver is specified, the data items
not specified will assume previous data as default. All commands assume the current solver.
Remarks:
• The name of the solver will be used as the name of the sub-directory to the working directory.
• The command solver package_identifier name initializes a new solver environment. All
subsequent commands up to the next “solver name” command will apply to that particular solver.
This is particularly important when specifying response name commandline commands as each
response is assigned to a specific solver and is recovered from the directory bearing the name of the
solver. (See Section 10).
• Do not specify the command nohup before the solver command and do not specify the UNIX
background mode symbol &. These are automatically taken into account.
• The solver command name must not be an alias. The full path name (or the full path name of a
script which contains the full solver path name) must be specified.
The LS-DYNA restart command will use the same command line arguments as the starting command line,
replacing the i=input file with r=runrsf.
The LS-DYNA MPP (Message Passing Parallel) version can be run using the LS-DYNA option in the "Solver" window of LS-OPTui (the same as the dyna option for the solver in the command file). However, the run commands must be specified in a script, e.g. the UNIX script runmpp:
dumpbdb dbout
Remarks:
1. DynaOpt.inp is the reserved name for the LS-DYNA MPP input file name. This file is normally
created in the run directory by LS-OPT after substitution of the variables or creation by a preprocessor.
The original template file can have a different name and is specified as the input file in the solver
input file command.
3. The file dumpbdb for creating the ASCII database must be executable.
4. The script (e.g. runmpp) can be specified using:
(a) a path relative to the run directory: two levels above the run directory (see example above);
(b) an absolute path, e.g. "/origin/users/john/crash/runmpp";
(c) a directory which is in the path. In this case the command is:
solver command "runmpp".
5. LS-DYNA MPP will only give continual progress feedback through LS-OPT for version 960 and later.
An own solver can be specified using the solver own solvername command, or by selecting User-defined in LS-OPTui. The solver command " " can either execute a command or a script. The substituted input file UserOpt.inp will automatically be appended to the command or script. Variable substitution will be performed in the solver input file (which will be renamed UserOpt.inp) and the solver append file. If the own solver does not generate a 'Normal' termination command to standard output, the solver command must execute a script that has as its last statement the command:
echo 'N o r m a l'
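A minimal sketch of such a script follows (the solver path and log file name are hypothetical; LS-OPT appends the name of the substituted input deck UserOpt.inp to the command):
#!/bin/sh
# $1 is the input deck UserOpt.inp appended by LS-OPT
/usr/local/bin/mysolver $1 > mysolver.log 2>&1
# signal normal termination to LS-OPT
echo 'N o r m a l'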
7.3 Preprocessors
The preprocessor must be identified as well as the command used for the execution. The command file
executed by the preprocessor to generate the input deck must also be specified. The preprocessor
specification is valid for the current solver environment.
The interfacing of a preprocessor involves the specification of the design variables, input files and the
preprocessor run command. Interfacing with LS-INGRID, TrueGrid7 and AutoDV8 is detailed in this
section. The identification of the design variables in the input file is detailed in Section 7.1.
7.3.1 LS-INGRID
The identifier in the prepro section for the use of LS-INGRID is ingrid. The file ingridopt.inp
is created from the LS-INGRID input template file.
Example:
This will allow the execution of LS-INGRID using the command "ingrid i=ingridopt.inp -d TTY". The file ingridopt.inp is created by replacing the << name >> keywords in the p9i file with the relevant values of the design variables.
7.3.2 TrueGrid
The identifier in the prepro section for the use of TrueGrid is truegrid. This will allow the execution of TrueGrid using the command "prepro program_name i=TruOpt.inp". The file TruOpt.inp is created by replacing the << name >> keywords in the TrueGrid input template file with the relevant values of the design variables.
7 Registered Trademark of XYZ Scientific Applications, Inc.
8 Registered Trademark of Altair Engineering, Inc.
Example:
These lines will execute TrueGrid using the command "tgx i=cyl", having replaced all the keyword names << name >> in cyl with the relevant values of the design variables. Note that the TrueGrid input file must end with the commands:
write end
so that the output deck is written before TrueGrid terminates.
7.3.3 AutoDV
The geometric preprocessor AutoDV can be interfaced with LS-OPT which allows shape variables to be
specified. The identifier in the prepro section for the use of AutoDV is templex (the name of an
auxiliary product: Templex9). The use of AutoDV requires several input files to be available.
1. Input deck: At the top, the variables are defined as DVAR1, DVAR2, etc. along with their current values.
The default name is input.tpl. This file is specified as the prepro input file.
2. Control nodes file: This is a nodal template file used by Templex to produce the nodal output file using
the current values of the variables. This file is specified using the prepro controlnodes
command. The default name is nodes.tpl.
3. A coefficient file that contains original coordinates and motion vectors specified in two columns must be
available. The command used is prepro coefficient file and the default file name is
nodes.shp.
4. Templex produces a nodal output file that is specified under the solver append file command.
The default name is nodes.include.
9 Registered Trademark of Altair Engineering, Inc.
Example:
$
$ DEFINITION OF SOLVER "1"
$
solver dyna '1'
solver command "lsdyna"
solver append file "nodes.include"
solver input file "dyna.k"
prepro templex
prepro command "/origin_2/user/mytemplex/templex"
prepro input file "a.tpl"
prepro coefficient file "a.shp"
prepro controlnodes file "a.dynakey.node.tpl"
The prepro command will enable LS-OPT to execute the following command in the default case:
Remarks:
1. LS-OPT (<<varname>> type) substitutions can be specified in the Templex input file and the solver
input file.
2. LS-OPT uses the name of the variable on the DVARi line of the input file:
{parameter(DVAR1,"Radius_1",1,0.5,3.0)}
{parameter(DVAR2,"Radius_2",1,0.5,3.0)}
to replace the variables and bounds at the end of each line by the current values. This name, e.g.
Radius_1 must therefore also be defined in the LS-OPT command file (see Section 8.1). The DVARi
designation is not changed in any way, so, in general there is no relationship between the number or rank
of the variable specified in LS-OPT and the number or rank of the variable as represented by i in DVARi.
90 LS-OPT Version 2
CHAPTER 6: EXECUTION
3. LS-OPT can also replace variables in the Templex input file using the standard LS-OPT <<varname>>
notation. This would normally not be required as typically only the DVARi lines are modified for the
Templex run. Therefore, typically, no manual changes are required after creation of the Templex input
file using variable names that are consistent with the variable names defined in the LS-OPT command
file.
In its simplest form, the prepro own preprocessor can be used in combination with the design point file XPoint, which is read from the run directory to obtain the current values of the design variables. Only the prepro command statement will therefore be used, and no input file (prepro input file) will be specified.
The user-defined prepro command will be executed with the standard preprocessor input file
UserPreproOpt.inp appended to the command. The UserPreproOpt.inp file is generated after
performing the substitutions in the prepro input file specified by the user.
Example:
prepro own
prepro command "gambit -r1.3 -id ../../casefile -in "
prepro input file "setup.jou"
Alternatively, a script can be executed with the prepro command to perform any number of command
line commands that result in the generation of a file called: UserOpt.inp for use by an own solver, or
DynaOpt.inp for use by LS-DYNA.
8. Design Variables, Constants
and Dependents
This chapter describes the definition of the input variables, constants and dependents, design space and the
initial subregion.
All the items in this chapter are specified in the Variables panel in LS-OPTui (Figure 8-1). Shown is a
multidisciplinary design optimization (MDO) case where not all the variables are shared. E.g., t_bumper
in Figure 8-1 is only associated with the solver CRASH.
Both the lower and upper bounds must be specified, as they are used for scaling.
Example:
$ RANGE OF ’Area’
range ’Area’ 0.4
Remarks:
1. A value of 25-50% of the design space can be chosen if the user is unsure of a suitable value.
3. The region of interest is centered on a given design and is used as a sub-space of the design space to
define the experimental design. If the region of interest protrudes beyond the design space, it is moved
without contraction to a location flush with the design space boundary.
8.6 Constants
Each variable above can be modified to be a constant. See Figure 8-2 where this is the case for t_bumper. Constants are used:
1. to define constant values in the input file such as π, e or any other constant that may relate to the optimization problem, e.g. initial velocity, event time, integration limits, etc.;
2. to convert a variable to a constant. This requires only changing the designation variable to constant in
the command file without having to modify the input template. The number of optimization variables is
thus reduced without interfering with the template files.
Example:
In this case, the dependent is of course not a variable, but a constant as well.
Dependent variables are specified using mathematical expressions (see Appendix E).
The string must conform to the rules for expressions and be placed in curly brackets. The dependent
variables can be specified in an input template and will therefore be replaced by their actual values.
Example:
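A minimal sketch of a dependent definition (the names are hypothetical):
variable 'Width' 50.0
variable 'Height' 20.0
dependent 'Area' {Width*Height}
The dependent Area can then be referenced in an input template as <<Area>> and will be substituted by its computed value.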
9. Point Selection Schemes and
Surface Approximations
This chapter describes the specification of the point selection schemes and response surface types.
The following options for experimental designs (point selection schemes) are available for use with
polynomial response surfaces:
Table 9-1: Polynomial response surface experimental design and approximation order

Experiment Description             Identifier         Default approximation order
Linear Koshal                      lin_koshal         linear
Quadratic Koshal                   quad_koshal        quadratic
Central Composite                  composite          quadratic
Latin Hypercube                    latin_hypercube    linear
Monte Carlo (batch version only)   monte_carlo        linear
Plan                               plan               linear
User-defined                       own                linear
D-optimal                          dopt               linear
Space filling                      space_filling      -

10 Available in Version 2.1
Factorial Designs
2^n                                2toK               linear
3^n                                3toK               quadratic
...                                ...                ...
11^n                               11toK              quadratic
In LS-OPTui, the experimental design is selected using the ExpDesign panel (Figure 9-1):
The default options are preset, e.g., the D-optimal point selection scheme (basis type: Full Factorial, 3 points
per variable) is the default for linear polynomials (see Figure 9-1), and the space-filling scheme is the
default for the Neural Net and Kriging methods.
The D-optimal design criterion can be used to select the best set of points from a given set of points. The
given set can be chosen as any of the other experimental designs and is referred to here as the basis
experiment. The order of the functions used has an influence on the distribution of the optimal experimental
points.
Order: The order of the functions that will be used. Linear, linear with interaction, elliptic or quadratic.
Basis experiment: The set of points from which the D-optimal design points must be chosen.
Number experiment: The number of experimental points that must be selected.
Number basis experiments: The number of basis experimental points (only random, latin
hypercube and space filling).
Example:
Order quadratic
experimental design dopt
basis experiment 5toK
Number experiment 9
Remarks:
1. Only the order of the experiment needs to be specified. The default experimental design is the D-optimal design, using a basis experimental design of 5^n points for quadratic and elliptic approximations and 3^n points for linear approximations. The default number of points selected is int(1.5(n + 1)) + 1 for linear, int(1.5(2n + 1)) + 1 for elliptic, int(0.75(n^2 + n + 2)) + 1 for interaction, and int(0.75(n + 1)(n + 2)) + 1 for quadratic. For example, for n = 5 variables, a quadratic approximation would by default select int(0.75 × 6 × 7) + 1 = 32 points from a 5^5 = 3125-point basis experiment. This results in about 50% more points being analyzed than the minimum required. If the user wants to override this number of experiments, the command solver number experiment has to be used after solver order function_order.
2. The number of experiments is reduced by the number already available in the Experiments.PRE or
AnalysisResults.PRE files.
3. A higher-order experimental design is more accurate at the cost of more analysis runs.
4. The Monte Carlo, Latin Hypercube and Space-Filling point selection schemes require a user-specified
number of experiments.
6. The files Experiments and AnalysisResults will always have the same experiments after
extraction of results. Both these files also mirror the result directories for a specific iteration.
Latin Hypercube
The Latin Hypercube design is useful for constructing a basis experimental design for the D-optimal design for a large number of variables, where the cost of using a full factorial design is excessive. E.g. for 15 design variables, the number of basis points for a 3^n design is 3^15, i.e. more than 14 million.
Even if the Latin Hypercube design has enough points to fit a response surface, there is a likelihood of
obtaining poor predictive qualities or near singularity during the regression procedure. It is therefore better
to use the D–optimal experimental design for polynomials.
Example:
Order linear
experimental design latin_hypercube
Number experiment 20
The Latin Hypercube point selection scheme is also well suited to sequential random search methods (see
Section 2.13).
Space filling*
Only algorithm 5 (see Section 2.6.6) is available in LS-OPTui. This algorithm maximizes the minimum
distance between experimental design points. The only item of information that the user must provide for
this point selection scheme, is the number of experimental design points. Space filling is useful when used
in conjunction with the Neural Net (neural network) and Kriging methods (see Section 9.1.3).
If analytical sensitivities are available, they must be provided for the relevant function in a file named
Gradient. The values should be placed on a single line, separated by spaces.
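For a problem in three design variables, the Gradient file would therefore contain a single line such as the following (the values are illustrative):
1.23 -0.415 0.088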
In LS-OPTui, the Response Surface Method field must be set to Analytical Sensitivities. See Figure 9-2.
To use numerical sensitivities, select Numerical Sensitivities in the Response Surface Method field in
LS-OPTui and assign the perturbation as a fraction of the design space.
Numerical sensitivities are computed by perturbing n points relative to the current design point x_0, where the j-th perturbed point is
x_i^{(j)} = x_i^{(0)} + \varepsilon_x \delta_{ij}
with \delta_{ij} = 0 if i \neq j and \delta_{ij} = 1 if i = j. The perturbation constant \varepsilon_x is relative to the design space: it is the fraction specified by the user multiplied by the size of the design space.
To apply neural network or Kriging approximations, select the appropriate option in the Response Surface
Method field in LS-OPTui. See Figure 9-3. The following Point Selection Schemes can be used with neural
networks or Kriging, and the user can select either a sub-region approach, or update the number of points
used in the approximation with each optimization iteration. An updated network is fitted to all the points.
Figure 9-3: Selecting the Neural network approximation method in the ExpDesign panel
It is recommended that the Space-Filling design be used with neural network or Kriging approximations,
especially with a large number of design variables. Please refer to Section 4.5 for recommendations on how
to use these approximations.
In LS-OPTui, the approximation order is set in the Response Surface Order field. Neural networks or Kriging (Section 4.5) are selected with the Optimization Method field (see Figure 9-3).
If a user wants to specify an experimental design plan in all iterations, a file Experiments.PLAN must
be supplied. The point coordinates must be normalized to the bounds [-1; 1].
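A sketch of such a file for three points in two design variables (whitespace-separated columns are assumed):
-1.0 -1.0
0.0 0.5
1.0 -0.25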
The move/stay commands can be used to define an environment in which the constraint bound
commands (Section 11.4) can be used to double as bounds for the reasonable design space.
The move start option moves the designs to the starting point instead of the center point (see Section 2.7).
Example 1:
The example above shows the lines required to determine a set of points that will be bounded by an upper
bound on the Energy.
Example 2:
This specification of the move command ensures that the points are selected such that the sum of the two
variables does not exceed 50.
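A sketch of such a specification (the variable and composite names are hypothetical):
composite 'x_sum' {x1 + x2}
move
constraint 'x_sum'
upper bound constraint 'x_sum' 50.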
Remarks:
1. For constraints that are dependent on simulation results, a reasonable design space can only be created if
response functions have already been created by a previous iteration. The mechanism is as follows:
Automated design: After each iteration, the program converts the database file
DesignFunctions to file DesignFunctions.PRE. DesignFunctions.PRE then
defines a reasonable design space and is read at the beginning of the next design iteration.
Manual (semi-automated) Procedure: If a reasonable design space is to be used, the user must
ensure that a file DesignFunctions.PRE.solver_name is resident in the working directory
before starting an iteration. This file can be copied from the DesignFunctions file resulting
from a previous iteration.
2. A reasonable design space can only be created using the D-optimal experimental design.
3. The reasonable design space will only be created if the center point (or the starting point in the case of
move start) of the region of interest is feasible.
Feasibility is determined within a tolerance of 0.001*| fmax – fmin| where fmax and fmin are the maximum
and minimum values of the interpolated response over all the points.
4. The move feature should be used with caution, since a very tightly constrained experimental design may
generate a poorly conditioned response surface.
where string is the name of the master solver in single quotes.
11 Available in Version 2.1
10. Histories and Responses
This chapter describes the specification of histories and responses. The standard response interfaces for LS-DYNA are also discussed.
The string is an interface definition (in double quotes), while the math_expression is a mathematical
expression (in curly brackets).
A user-defined program may be used to extract a history file from the database. The program must produce
an output file with the reserved name LsoptHistory. This file contains two columns of data, separated
by whitespace (a space or tab) or the following characters: comma (,), semi-colon (;) or equal sign(=).
Lines that do not have the recognizable format will be ignored, so that files with headers or footers do not
need to be specially modified.
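A valid LsoptHistory file might therefore look as follows (the values are illustrative):
0.000, 0.0
0.001, 12.3
0.002, 24.1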
Example 1:
history ’displacement_1’ "DynaASCII nodout r_disp 12789 TIMESTEP 0.0 SAE 60"
history ’displacement_2’ "DynaASCII nodout r_disp 26993 TIMESTEP 0.0 SAE 60"
history ’deformation’ expression {displacement_2 - displacement_1}
response ’final_deform’ expression {deformation(200)}
Example 2:
Example 3:
Example 4:
In this example a user-defined program (the post-processor LS-PREPOST) is used to produce a history file
from the LS-DYNA database. The LS-PREPOST command file get_force:
Remarks:
1. Histories are intermediate entities used by response definitions (see Section 10.2); they cannot be used directly to define a response surface. Only a response can define a response surface.
2. For LS-DYNA history definition and syntax, please refer to Section 10.5.
Each extracted response is identified by a name and the command line for the program that extracts the results. The command line must be enclosed in double quotes. If scaling and/or offsetting of the response is required, the final response is computed as (extracted response × scale factor) + offset. This operation can also be achieved with a simple mathematical expression.
A mathematical expression for a response is defined in curly brackets after the response name.
Example:
response 'Displacement_x' 25.4 0.0 "DynaASCII nodout 'r_disp' 63 TIMESTEP 0.1"
response 'Force' "$HOME/ownbin/calculate force"
response 'Displacement_y' "calc constraint2"
response 'Disp' expression {Displacement_x + Displacement_y}
Remarks:
1. The first command will use a standard interface for the specified solver package. The standard interfaces
for LS-DYNA are described in Section 10.5.
2. The middle two commands are used for a user-supplied interface program (see Section 10.12). The
interface name must either be in the path or the full path name must be specified. Aliases are not
allowed.
3. For the last command, the second argument expression is a reserved name.
The default approximation order is the order of the D-optimal experimental design (Section 2.6.4) or as indicated in Section 2.6 for the standard designs. FF refers to the feed-forward neural network approximation method (see Section 4.5).
1. Expression composite: A general expression can be specified for a composite. The composite can
therefore consist of constants, variables, dependent variables, responses and other composites.
2. Standard composite:
(a) Targeted composite: This is a special composite in which a target is specified for each response or
variable. The composite is formulated as the ‘distance’ to the target using a Euclidean norm
formulation. The components can be weighted and normalized.
F = \sum_{j=1}^{m} W_j \left( \frac{f_j(x) - F_j}{\sigma_j} \right)^2 + \sum_{i=1}^{n} \omega_i \left( \frac{x_i - X_i}{\chi_i} \right)^2 \qquad (10.1)
where σ and χ are scale factors and W and ω are weight factors. These are typically used to
formulate a multi-objective optimization problem in which F is the distance to the target values of
design and response variables.
A suitable application is parameter identification. In this application, the target values Fj are the
experimental results that have to be reproduced by a numerical model as accurately as possible. The
scale factors σj and χi are used to normalize the responses. The second component, which uses the
variables can be used to regularize the parameter identification problem. Only independent variables
can be included. See Figure 10-3 for an example of a targeted composite response definition.
(b) Weighted composite: Weighted response functions and independent variables are summed in this
standard composite. Each function component or variable is scaled and weighted.
F = \sum_{j=1}^{m} W_j \frac{f_j(x)}{\sigma_j} + \sum_{i=1}^{n} \omega_i \frac{x_i}{\chi_i} \qquad (10.2)
These are typically used to construct objectives or constraints in which the responses and variables
appear in linear combination.
Remarks:
2. An objective definition involving more than one response or variable requires the use of a composite
function.
3. In addition to specifying more than one function per objective, multiple objectives can be defined (see
Section 11.2).
This command identifies the composite function. The type of composite is specified as weighted,
targeted or expression. The expression composite type does not have to be declared and can simply
be stated as an expression.
The math_expression is a mathematical expression given in curly brackets (see Appendix E).
The number of composite functions to be employed must be specified in the problem description.
The value is the target value for type: targeted and the weight value for the type: weighted. The
scale_factor is a divisor.
Example:
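A sketch of the command lines, assuming response names f3 and f4 and the scale keyword described above:
composite 'damage' type targeted
composite 'damage' response 'f3' 20. scale 30.
composite 'damage' response 'f4' 35. scale 25.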
for the composite function
F_{damage} = \left( \frac{f_3 - 20}{30} \right)^2 + \left( \frac{f_4 - 35}{25} \right)^2 .
Example:
for the composite function which defines the inequality x10 > x9.
If weights are required for the targeted function, an additional command may be given.
The following interfaces are available for extracting results:
1. Standard LS-DYNA result interfaces. This interface provides access to the ASCII and binary databases (d3plot or LSDA) of LS-DYNA. The interface is an integral part of LS-OPT, except for the extraction of mass properties, which relies on a Perl program, DynaMass. The perl compiler is included in the same directory as lsopt during installation.
2. User specified interface programs. These can reside anywhere. The user specifies the full path.
Aside from the standard interfaces that are used to extract any particular data item from the database, specialized responses for metal-forming are also available. The computation and extraction of these secondary responses are discussed in Section 10.8.
The user must ensure that the LS-DYNA program will provide the output files required by LS-OPT.
As multiple result output sets are generated during a parallel run, the user must be careful not to generate
unnecessary output. The following rules should be considered:
• To save space, only those output files that are absolutely necessary should be requested.
• A significant amount of disk space can be saved by judiciously specifying the time interval between
outputs (DT). E.g. in many cases, only the output at the final event time may be required. In this case the
value of DT can be set slightly smaller than the termination time.
• The result extraction is done immediately after completion of each simulation run. Database files can be
deleted immediately after extraction if requested by the user (clean file (see also Section 6.9)).
• If the simulation runs are executed on remote nodes, the responses of each simulation are extracted on
the remote node and transferred to the local run directory.
For more specialized responses the Perl programs provided can be used as templates for the development of
own routines.
10.6.1 Mass
Example:
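A sketch of a mass extraction using the DynaMass interface (the part numbers are hypothetical and the MASS attribute keyword is assumed):
response 'Vehicle_Mass' "DynaMass 2 3 14 MASS"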
Theory: Mode tracking is required during optimization using modal analyses as mode switching (a change
in the sequence of modes) can occur as the optimizer modifies the design variables. In order to extract the
frequency of a specified mode, LS-OPT performs a scalar product between the baseline modal shape (mass-
orthogonalized eigenvector) and each mode shape of the current design. The maximum scalar product
indicates the mode most similar in shape to the original mode selected. To adjust for the mass
orthogonalization, the maximum scalar product is found in the following manner:
\max_j \left( M_0^{1/2} \phi_0 \right)^{T} \left( M_j^{1/2} \phi_j \right) \qquad (10.3)
where M is the mass matrix (excluding all rigid bodies), φ is the mass-orthogonalized eigenvector and the
subscript 0 denotes the baseline mode. This product can be extracted with the GENMASS attribute (see
Table 10-4). Rigid body inertia and coupling will be incorporated in Version 2.1.
Example:
Remarks:
1. The user must identify which baseline mode is of interest by viewing the baseline d3eigv file in
LSPOST. LS-OPT must then be run for the baseline (iterate 0) to generate the required modal output
files. These include: massvectout.**, eigvectout.** and eigvalout.**, where ** correspond to the modal
sequence number, e.g. 03 for mode 3. The DynaFreq command must be omitted in the baseline run
since an error will occur in the absence of the baseline data files referred to in point 2 below.
2. The three files massvectout.##, eigvectout.## and eigvalout.## must be renamed: massvectBaseline.##,
eigvalBaseline.## and eigvectBaseline.##, where ## refers to the number of the mode of interest selected
above, i.e. 05 for mode 5. These files must be placed in the working directory before the optimization
starts.
3. The optimization run can now be started with the activated DynaFreq command.
4. Additional files are generated by LS-DYNA and placed in the run directories to perform the scalar
product and extract the modal frequency and number.
This is a generic interface for the extraction of simulation response histories from ASCII data files. Any
variable available in any ASCII data file can be extracted as either a response or a history.
Remarks:
The history and response syntax is the same, except that, for a history, the time attribute is always
TIMESTEP.
1. See Appendix A. The res_type is allowed to have the valid keyword as a substring, e.g. elout_beam.
2. Parameters g and u apply only to the injury components [HIC15|HIC36|CSI]. Not applicable to
histories.
3. Integration point for the elout results. Relates only to the shell element stress, thick shell element and
beam element types. For shell element strain the positions UPPER or LOWER must be specified. See
ELOUT, Appendix A.
4. a. The time-dependent response is integrated to find the average (AVE) response:
\frac{1}{t_2 - t_1} \int_{t_1}^{t_2} f(t)\,dt
b. The upper time limit is not applicable to the history command, while the time will be ignored in e.g.
5. The SAE, BUTT (Butterworth) or point-wise averaging (AVER) filters can be applied. If the filter attribute is not specified, no filtering will be done. If the attribute is specified without a value, a 60 cycles/time unit (or 5-point averaging) filter is applied. Note: The user should be careful when specifying the filtering frequency: for instance, if a filter of e.g. 60 Hz is desired, but the time units are milliseconds, a value of 60/1000 = 0.06 must be specified.
Examples:
The former command requests the maximum value of component 21 over all parts while the latter command
requests the same, but only over parts 1, 2 and 4.
The user must ensure that the d3plot files are produced by the LS-DYNA simulation.
Example:
• The values of some strain points are located above the FLD curve. In this case the constraint is
computed as:
g = dmax
with dmax the maximum smallest distance of any strain point above the FLD curve to the FLD curve.
• All the values of the strain points are located below the FLD curve. In this case the constraint is
computed as:
g = –dmin
with dmin the minimum smallest distance of any strain value to the FLD curve (Figure 10-4).
Figure 10-4: FLD curve – constraint definition. (a) FLD constraint active: g = d_max; (b) FLD constraint inactive: g = –d_min. The strain points are plotted in the (ε2, ε1) plane, with d1, d2, d3 the distances of individual strain points from the FLD curve.
The values of both the principal upper and lower surface in-plane strains are used for the FLD constraint.
The following must be defined for the model and FLD curve:
Example:
A more general FLD criterion is available if the forming limit is represented by a general curve. Any of the
upper, lower or middle shell surfaces can be considered.
Remarks:
1. A piece-wise linear curve is defined by specifying a list of interconnected points. The abscissae (ε2) of
consecutive points must increase (or an error termination will occur). Duplicated points are therefore not
allowed.
2. The curve is extrapolated infinitely in both the negative and positive directions of ε2. The first and last
segments are used for this purpose.
3. The computation of the constraint value is the same as shown in (Figure 10-4).
The following must be defined for the model and FLD curve:
Example:
For all three specifications load curve 23 is used. In the first two specifications, only parts 1, 2 and 3 are
considered.
Remarks:
1. The interface program produces an output file FLD_curve which contains the ε1 and ε2 values in the
first and second columns respectively. Since the program first looks for this file, it can be specified in
lieu of the keyword specification. The user should take care to remove an old version of the
FLD_curve if the curve specification is changed in the keyword input file. If a structured input file is
used for LS-DYNA input data, FLD_curve must be created by the user.
2. The scale factor and offset values feature of the *DEFINE_CURVE keyword are not utilized.
10.9 Extracting data from the LS-DYNA Binout file
The Binout commands differ from the DynaASCII commands in that a keyword must be specified to delimit an item in a command, for example "-res_type nodout" instead of just "nodout". The position of the items in a command is therefore not important for the Binout extraction commands.
The LS-PREPOST Binout capability can be used for the graphical exploration and troubleshooting of the
data.
The response options are an extension of the history options – a history will be extracted as part of the
response extraction.
Results can be extracted for the whole model or a finite element entity such as a node or element. For shell
and beam elements the through-thickness position can be specified as well.
Example:
history 'ELOUT1' "BinoutHistory -res_type Elout -sub shell -cmp sig_xx
-id 1 -pos 1"
history 'invarHis' "BinoutHistory -res_type nodout -cmp displacement
-invariant MAGNITUDE -id 432"
Remarks:
1. The result types and subdirectories are as documented for the *DATABASE_OPTION LS-DYNA
keyword.
2. The component names are as listed in Appendix C: LS-DYNA Binout Result File and Components.
3. The individual components required to compute the invariant will be extracted automatically; for
example, “-cmp displacement –invariant MAGNITUDE” will result in the automatic
extraction of the x, y and z components of the displacement.
4. For the shell and thickshell strain results the upper and lower surface results are written to the database
using the component names such as lower_eps_xx and upper_eps_xx.
These operations will be applied in the following order: averaging, filtering, and slicing.
Example:
history 'ELOUT12' "BinoutHistory -res_type Elout -sub shell -cmp sig_xx
-id 1 -pos 2 -filter SAE -start_time 0.02 -end_time 0.04"
history 'nodHist432acc_AVE' "BinoutHistory -res_type nodout
-cmp x_acceleration -id 432 -ave_points 5"
A response is extracted from a history – all the history options are therefore applicable and options required
for histories are required for responses as well.
Example:
response 'eTime' "BinoutResponse -res_type glstat -cmp kinetic_energy
-select TIME -end_time 0.015"
$
response 'nodeMax' "BinoutResponse -res_type nodout -cmp x_acceleration
-id 432 -select MAX -filter SAE -filter_freq 10"
Remarks:
1. The maximum, minimum, average, or value at a specific time must be selected. If selection is TIME
then the end_time history value will be used. If end_time is not specified, the last value (end of
analysis) will be used.
Injury criteria such as HIC can be specified as the result component. The acceleration components will be
extracted, the magnitude computed, and the injury criteria computed from the acceleration magnitude
history.
Example:
response 'HIC_ms' 1 0 "BinoutResponse -res_type Nodout -cmp HIC15
-gravity 9810. -units MS -id 432"
Not all components are available for both the DynaASCII and the Binout extraction routines. In particular, invariants such as the maximum principal stress may not be available in Binout. Some of these invariants can be constructed using expressions (see Appendix E: Mathematical Expressions). Error and warning messages will be generated.
set Binout
Example:
$ DynaASCII commands following this command should be
$ translated to BinoutResponse and BinoutHistory commands
set Binout
10.11 DynaStat*
When using LS-OPT for reliability-based design optimization, statistical quantities like the standard
deviation of responses must be extracted. This command is only available in the current batch version. The
syntax is:
• The C language, e.g.:
printf("%lf\n", output_value);
or, in Perl:
print "$output_value\n";
The string “N o r m a l” must be written to the standard error file identifier (stderr in C) to signify
a normal termination. (See Section 17.1 for an example).
Examples:
1. The user has an own executable program ”ExtractForce” which is kept in the directory
$HOME/own/bin. The executable extracts a value from a result output file.
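The corresponding response definition might read (a sketch following the user-interface syntax of Section 10.2):
response 'Force' "$HOME/own/bin/ExtractForce"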
2. If Perl is to be used to execute the user script DynaFLD2, the command may be:
11. Objectives and Constraints
This chapter describes the specification of objectives and constraints for the design formulation.
11.1 Formulation
Multi-criteria optimal design problems can be formulated. These typically consist of the following:
Minimize F(\Phi_1(x), \Phi_2(x), \ldots, \Phi_N(x))
subject to
L_j \le g_j(x) \le U_j, \quad j = 1, 2, \ldots, m
where F represents the multi-objective function, \Phi_i = \Phi_i(x_1, x_2, \ldots, x_n) represent the various objective functions and g_j = g_j(x_1, x_2, \ldots, x_n) represent the constraint functions, bounded below and above by L_j and U_j. The symbols x_i represent the n design variables.
In order to generate a trade-off design curve involving objective functions, more than one objective \Phi_i must be specified, so that the multi-objective is
F = \sum_{k=1}^{N} \omega_k \Phi_k . \qquad (11.1)
A component function must be assigned to each objective function where the component function can be
defined as a composite function F (see Section 10.4) or a response function f . The number of objectives, N,
must be specified in the problem description (see Section 5.2).
Examples:
objective ’Intrusion_1’
objective ’Intrusion_2’ 2.
objective ’Acceleration’ 3.
for the multi-objective
F = \Phi_1 + 2\Phi_2 + 3\Phi_3 = F_1 + 2F_2 + 3f_2
Remarks:
1. The distinction between objectives is made solely for the purpose of constructing a Pareto-optimal curve involving multiple objectives. However, it is generally better to construct a Pareto-optimal curve using a varying constraint bound instead of varying weights. See Sections 11.4 and 13.3.
The default is to minimize the objective function. The program can however be set to maximize the
objective function. In LS-OPTui, maximization is activated in the Objective panel.
Maximize
Example:
Maximize
Objective ’Mass’
Constraint ’Acceleration’
constraint constraint_name
Examples:
history ’displacement_1’ "DynaASCII nodout ’r_disp’ 12789 TIMESTEP 0.0 SAE 60"
history ’displacement_2’ "DynaASCII nodout ’r_disp’ 26993 TIMESTEP 0.0 SAE 60"
history 'Intrusion' expression {displacement_2 - displacement_1}
response 'Intrusion_80' expression {Intrusion(80)}
constraint ’Intrusion_80’
Remark:
1. A flag can be set to identify specific constraint bounds to define a reasonable design space. For this
purpose, the move environment must be specified (See Section 9.4).
Each command functions as an environment. Therefore, all lower bound constraint or upper bound constraint commands appearing after a strict/slack command are classified as strict or slack, respectively.
In the following example, the first two constraints are slack while the last three are strict. The purpose of the
formulation is to compromise only on the knee forces if a feasible design cannot be found.
Example:
$ Objective:
$-----------
composite ’Knee_Forces’ type weighted
composite ’Knee_Forces’ response ’Left_Knee_Force’ 0.5
composite ’Knee_Forces’ response ’Right_Knee_Force’ 0.5
objective ’Knee_Forces’
$
$ Constraints:
$-------------
SLACK
Constraint 'Left_Knee_Force'
Upper bound constraint 'Left_Knee_Force' 6500.
$
Constraint 'Right_Knee_Force'
Upper bound constraint 'Right_Knee_Force' 6500.
$
STRICT
Constraint 'Left_Knee_Displacement'
Lower bound constraint 'Left_Knee_Displacement' -81.33
$
Constraint 'Right_Knee_Displacement'
Lower bound constraint 'Right_Knee_Displacement' -81.33
$
Constraint 'Kinetic_Energy'
Upper bound constraint 'Kinetic_Energy' 154000.
The composite function is explained in Section 10.4. Note that the same response functions appear in both the objective and the constraint definitions. This ensures that the violations of the knee forces are minimized; if both forces are feasible, their average (as defined by the composite) is minimized.
The constraint bounds of all the soft constraints can also be set to a number that is impossible to comply with, e.g. zero. This forces the optimization procedure to always ignore the objective, so that it minimizes the maximum response.
In the following example, the objective is to minimize the maximum of 'Left_Knee_Force' and 'Right_Knee_Force'. The displacement and energy constraints are strict.
Example:
$-------------
SLACK
Constraint 'Left_Knee_Force'
Upper bound constraint 'Left_Knee_Force' 0.
$
Constraint 'Right_Knee_Force'
Upper bound constraint 'Right_Knee_Force' 0.
$
STRICT
Constraint 'Left_Knee_Displacement'
Lower bound constraint 'Left_Knee_Displacement' -81.33
$
Constraint 'Right_Knee_Displacement'
Lower bound constraint 'Right_Knee_Displacement' -81.33
$
Constraint 'Kinetic_Energy'
Upper bound constraint 'Kinetic_Energy' 154000.
Remarks:
2. The variable bounds of both the region of interest and the design space are always hard.
4. If a feasible design is not possible, the most feasible design will be computed.
5. If feasibility must be compromised (there is no feasible design), the solver will automatically use the
slackness of the soft constraints to try and achieve feasibility of the hard constraints. However, there is
always a possibility that hard constraints must still be violated (even when allowing soft constraints). In
this case, the variable bounds may be violated, which is highly undesirable as the solution will lie
beyond the region of interest and perhaps beyond the design space. This could cause extrapolation of the
response surface or worse, a future attempt to analyze a design which is not analyzable, e.g. a sizing
variable might have become zero or negative.
6. Soft and strict constraints can also be specified for search methods. If there are feasible designs with
respect to hard constraints, but none with respect to all the constraints, including soft constraints, the
most feasible design will be selected. If there are no feasible designs with respect to hard constraints, the
problem is ‘hard-infeasible’ and the optimization terminates with an error message.
12. Running the Optimization Problem
This chapter explains simulation job-related information and how to start an optimization run from the graphical user interface.
The optimization process is triggered by the iterate command in the input file or by the Run command
in the Run panel in LS-OPTui (Figure 12-1). The optimization history is written to the
OptimizationHistory file and can be viewed using the View panel.
Refer to Section 16.1 for the modification of the stopping type in the Command File.
12.3 Restarting
When a solution is interrupted (through the Stop button) or if a previous optimization run is to be repeated
from a certain starting iteration, this can be specified in the appropriate field in the Run panel (Figure 12-1).
When using LS-DYNA, the user can also view the progress (time history) of the analysis by selecting one of
the available quantities (Time Step, Kinetic Energy, Internal Energy, etc.).
13. Viewing Results
This chapter describes the viewing of metamodeling accuracy, optimization history, trade-off and ANOVA results.
The View panel in LS-OPTui is used to view the results of the optimization process. The results include the
metamodeling accuracy data, optimization history of the variables, dependents, responses, constraints and
objective(s). Trade-off data can be generated using the existing response surfaces, and ANOVA results can
be viewed.
There are three options for viewing accuracy and trade-off (anthill plots): data for the current iteration, data for all previous iterations simultaneously, or data for all iterations (see e.g. Figure 13-1). The last option will also show the last verification point (optimal design) in green.
By clicking on any of the red squares, the data of the selected design point is listed. For LS-DYNA results,
LS-PREPOST can then be launched to investigate the simulation results.
Trade-off studies can also be conducted based on the results of an optimization run. This is because the
response surfaces for each response are at that stage available at each iteration for rapid evaluation.
Trade-off is performed in LS-OPTui using the View panel and selecting Trade-off (Figure 13-3).
Trade-off curves can be developed using either constraints or objectives. The curve can be plotted with any of the variables, responses, composites, constraints or objectives on either of the two axes. Care should be taken when selecting, e.g., a certain constraint for plotting, as it may also be a response or composite, and this value may differ from the constraint value, depending on whether the constraint is active during the trade-off process. The example in the figure below has Constraint: Intrusion selected for the X-Axis Entity, and not Composite: Intrusion.
The ANOVA results are viewed in bar chart format by clicking on the ANOVA button. The ANOVA panel
is shown in Figure 13-4.
14. Applications of Optimization
This chapter provides a brief description of some of the applications of optimization that can be performed using LS-OPT. It should be read in conjunction with Chapter 17, the Examples chapter, where the applications are illustrated with practical examples.
All variables are defined first, as when solving non-MDO problems, regardless of whether they belong to all disciplines or solvers. This means that the variable starting value, bounds (minimum and maximum) and range (sub-region size) are defined together. If a variable is not shared by all disciplines, i.e., it belongs to some but not all of the disciplines (solvers), then it is flagged using the syntax local variable_name. At this stage, no mention is made in the command file of which solver(s) the particular variable belongs to. This reference is made in the solver context, where the syntax Solver variable variable_name is used; see the next paragraph and the example below.
To limit the scope of a variable, an experimental design or job information to a particular solver, the prefix
solver should be applied to the commands below. The solver definition must precede any commands
having the solver prefix. Omission of the prefix implies that the specification is multidisciplinary, i.e., it
is shared between all the specified solvers.
Variable
Concurrent jobs
Order
Experiment design
Basis experiment
Number Basis experiment
Number experiment
Queuer
Update doe
Experiment duplicate
See the examples in Sections 17.6 and 17.7 for the command file format.
15. Probabilistic Modeling and Monte Carlo Simulation
15.1 Introduction
Probabilistic evaluations investigate the effects of variations of the system parameters on the system
responses.
The variation of the system parameters is described using design variables with probabilistic distributions describing their variation around the mean design variable value. In addition to the control variables, the concept of noise variables is introduced: variables that vary only according to a statistical distribution and whose values are not under the control of the analyst.
Accordingly, the system responses will vary according to some statistical distribution. From this
distribution, information such as the nominal value of the response, reliability, and extreme values is
inferred.
More background on the probabilistic methods is given in the theoretical manual (Section 3).
Item Description
name Distribution name
mu Mean value
sigma Standard deviation
Example:
Item Description
name Distribution name
lower Lower bound
upper Upper bound
Example:
The probability density is assumed to be piecewise uniform and the cumulative distribution to be piecewise linear. Either the PDF or the CDF data can be given:
• PDF distribution: The value of the distribution and the probability at this value must be provided for a given number of points along the distribution. The probability density is assumed to be piecewise uniform, extending from each value halfway to the neighboring values; both the first and the last probability must be zero.
• CDF distribution: The value of the distribution and the cumulative probability at this value must be provided for a given number of points along the distribution. The cumulative distribution is assumed to vary piecewise linearly. The first and last values in the file must be 0.0 and 1.0 respectively. (A sampling sketch based on this representation is given after the example below.)
Lines in the data file starting with the character ‘$’ will be ignored.
Item Description
name Distribution name
filename Name of file containing the distribution data
Example:
-5 0.00000
-2.5 0.11594
0 0.14493
2.5 0.11594
$ Last PDF value must be 0
5 0.00000
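To make the piecewise-linear CDF representation concrete, the following sketch (not LS-OPT code) draws samples from a tabulated CDF by inverse-transform interpolation; the cumulative probabilities in the table are hypothetical:

#include <stdio.h>
#include <stdlib.h>

/* Tabulated CDF: value x[i] with cumulative probability P[i];
   P must increase from 0.0 to 1.0 (piecewise linear in between). */
static const double x[] = { -5.0, -2.5, 0.0, 2.5, 5.0 };
static const double P[] = {  0.0, 0.21, 0.50, 0.79, 1.0 };
static const int    n   = 5;

/* Inverse-transform sampling: locate u in the CDF table and
   interpolate linearly to obtain the sampled value. */
double sample_cdf(double u)
{
    int i = 1;
    while (i < n - 1 && P[i] < u) i++;
    return x[i-1] + (x[i] - x[i-1]) * (u - P[i-1]) / (P[i] - P[i-1]);
}

int main(void)
{
    int k;
    for (k = 0; k < 5; k++) {
        double u = (double)rand() / RAND_MAX;
        printf("u = %5.3f  ->  x = %7.3f\n", u, sample_cdf(u));
    }
    return 0;
}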
Item Description
name Distribution name
mu Mean value in logarithmic domain
sigma Standard deviation in logarithmic domain
Example:
Item Description
name Distribution name
scale Scale parameter
shape Shape parameter
Example:
• Control variables: Variables that can be controlled at the design, analysis, and production levels; for example, a shell thickness. A control variable can therefore be assigned a nominal value and will have a variation around this nominal value. The nominal value can be adjusted during the design phase in order to obtain a more suitable design.
• Noise variables: Variables that are difficult or impossible to control at the design and production levels, but can be controlled at the analysis level; for example, loads and material variation. A noise variable takes the nominal value specified by the distribution, that is, it follows the distribution exactly.
Item Description
variableName Variable identifier
distributionName Distribution identifier
Example:
If the nominal value of a control variable is specified, then this value is used; the associated distribution describes the variation around this nominal value. For example, if a variable with a nominal value of 7 is assigned a normal distribution with µ = 0 and σ = 2, the resulting values of the variable will be normally distributed around a nominal value of 7 with a standard deviation of 2.
This behavior is only applicable to control variables; noise variables will always follow the specified
distribution exactly.
A noise variable is bounded by the specified distribution and does not have upper and lower bounds as control variables do. However, bounds are required for the construction of the approximating functions; they are chosen as described in the next subsection.
Item Description
state Whether the bounds must be enforced for the probabilistic
component of the variable.
Example:
Item Description
standardDeviations The subregion size in standard deviations for the noise
variable.
Example:
Two types of probabilistic evaluation are available:
• Monte Carlo.
• Monte Carlo using metamodels.
The upper and lower bounds on constraints will be used as failure values for the reliability computations.
The user must specify the experimental design strategy (sampling strategy) to be used in the Monte Carlo
evaluation. The Monte Carlo, Latin Hypercube and space-filling experimental designs are available. The
experimental design will first be computed in a normalized, uniformly distributed design space and then
transformed to the distributions specified for the design variables.
Only variables with a statistical distribution will be perturbed; all other variables will be considered at their
nominal value.
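As an illustration of this transformation step (a sketch, not LS-OPT code), a uniform design-space coordinate u in (0, 1) can be mapped to a normally distributed variable value by inverting the normal CDF; the mean and standard deviation below are arbitrary example values:

#include <stdio.h>
#include <math.h>

/* Standard normal CDF via the C99 erf() function. */
static double normal_cdf(double z) { return 0.5 * (1.0 + erf(z / sqrt(2.0))); }

/* Invert the CDF by bisection: returns z such that Phi(z) = u, 0 < u < 1. */
static double normal_inv(double u)
{
    double lo = -10.0, hi = 10.0;
    int i;
    for (i = 0; i < 80; i++) {
        double mid = 0.5 * (lo + hi);
        if (normal_cdf(mid) < u) lo = mid; else hi = mid;
    }
    return 0.5 * (lo + hi);
}

int main(void)
{
    double u[3] = { 0.1, 0.5, 0.9 };   /* uniform design coordinates       */
    double mu = 7.0, sigma = 2.0;      /* example nominal value and spread */
    int i;
    for (i = 0; i < 3; i++)
        printf("u = %4.2f  ->  x = %8.4f\n", u[i], mu + sigma * normal_inv(u[i]));
    return 0;
}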
The exact value at each point will be used. Composite functions referring to responses in more than one
discipline will not be computed because the experimental designs will differ across disciplines.
Example:
The number of function evaluations using the metamodels can be set by the user. The default value is 10^6.
The designs to be analyzed are chosen randomly respecting the distributions of the design variables.
Example:
Item Description
m Number of sample values
Example:
set reliability resolution 1000
16. Optimization Algorithm Selection and Settings
This chapter describes the parameter settings for the domain reduction and LFOPC methods that are used in
LS-OPT. The default parameters for both the domain reduction scheme and the core optimization algorithm
(LFOPC) should be sufficient for most optimization applications. The following sections describe how to
modify the default settings. These can only be modified using the command language.
The following parameters can be adjusted (refer also to Section 2.12). A suitable default has been provided for each parameter and the user should rarely find it necessary to change any of these parameters.
The iterative process is terminated if the following convergence criteria become active:

|f^(k) − f^(k−1)| / |f^(k−1)| < ε_f

and/or

‖x^(k) − x^(k−1)‖ / d < ε_x

where x refers to the vector of design variables, d is the size of the design space, f denotes the value of the objective function, and (k) and (k − 1) refer to two successive iteration numbers. The stoppingtype parameter determines whether and or or applies, e.g.

stoppingtype or

implies that the optimization will terminate when either criterion is met.
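A sketch of this stopping test in C (an assumed helper, not LS-OPT source):

#include <stdio.h>
#include <math.h>

/* Relative change in the objective and design-variable movement normalized
   by the design-space size d, combined according to the stopping type. */
int converged(double f_new, double f_old,
              const double *x_new, const double *x_old, int n,
              double d, double eps_f, double eps_x, int use_and)
{
    double df = fabs(f_new - f_old) / fabs(f_old);
    double dx2 = 0.0;
    int i;
    for (i = 0; i < n; i++)
        dx2 += (x_new[i] - x_old[i]) * (x_new[i] - x_old[i]);
    double dx = sqrt(dx2) / d;
    return use_and ? (df < eps_f && dx < eps_x)
                   : (df < eps_f || dx < eps_x);
}

int main(void)
{
    double xo[2] = { 2.0, 0.8 }, xn[2] = { 1.99, 0.81 };
    printf("%d\n", converged(3.56, 3.55, xn, xo, 2, 4.2, 0.01, 0.01, 0));
    return 0;
}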
The range limit can be used to specify the minimum size of the region of interest. This is not a stopping
criterion so that the solver will still continue to iterate until any of the other stopping criteria are met.
An application of the range limit is to maintain a constant tolerance on the random variables. See Section
17.2.9 for an example.
Example:
The values of the responses are scaled with the values at the initial design. The default parameters in
LFOPC should therefore be adequate. Should the user have more stringent requirements, the following
parameters may be set for LFOPC. These are only available in the command input file.
Remarks:
1. For higher accuracy, at the expense of economy, the value of µ max can be increased. Since the
optimization is done on approximate functions, economy is usually not important. The value of steps
must then be increased as well.
2. The optimization is terminated when either of the convergence criteria becomes active, that is, when

‖Δx‖ < ε_x

or

‖∇f(x)‖ < ε_f
3. It is recommended that the maximum step size, δ, be of the same order of magnitude as the "diameter of the region of interest". To enable a small step size for the successive approximation scheme, the value of delt has been defaulted to

δ = 0.05 √( Σ_{i=1}^{n} (range_i)² ).

4. If print = steps + 1, then the printing is done on step 0 and exit only. The values of the design variables are suppressed on intermediate steps if print < 0.
Example:
In the case of an infeasible optimization problem, the solver will find the most feasible design within the
given region of interest bounded by the simple upper and lower bounds. A global solution is attempted by
multiple starts from a set of random points.
17. Example Problems
This example problem as shown in Figure 17-1 has one geometric and one element sizing variable.
[Figure 17-1: the two-bar truss, showing the sizing variable x1 and the geometric variable x2]
The problem is statically determinate. The forces on the members depend only on the geometric variable.
There are two design variables: x1 the cross-sectional area of the bars, and x2 half of the distance (m)
between the supported nodes. The lower bounds on the variables are 0.2 cm² and 0.1 m, respectively. The upper bounds on the variables are 4.0 cm² and 1.6 m, respectively.
The weight of the truss is to be minimized:

f(x) = C1 x1 √(1 + x2²)    (17.1)

The stresses in the members are constrained to be less than 100 MPa:

σ1(x) = C2 √(1 + x2²) (8/x1 + 1/(x1 x2)) ≤ 1    (17.2)

σ2(x) = C2 √(1 + x2²) (8/x1 − 1/(x1 x2)) ≤ 1    (17.3)
Only the first stress constraint is considered since it will always have the larger value.
The C language is used for the simulation program. The following two programs simulate the weight
response and stress response respectively.
gw.c
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#define NUMVAR 2
exit (0);
}
gs.c
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#define NUMVAR 2
exit (0);
}
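The bodies of gw.c and gs.c are abbreviated above. A minimal sketch of gw.c is given below; the constants C1 = 1.0 and C2 = 0.124 of the classical two-bar truss formulation are assumptions, not values taken from this manual, and gs.c is analogous with the stress expression of Eq. (17.2):

#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#define NUMVAR 2

int main(int argc, char *argv[])
{
    double x[NUMVAR];
    int i;

    if (argc < NUMVAR + 1) exit(1);      /* variables arrive on the   */
    for (i = 0; i < NUMVAR; i++)         /* command line (cat XPoint) */
        x[i] = atof(argv[i + 1]);

    /* weight response of Eq. (17.1): f = C1*x1*sqrt(1 + x2^2) */
    printf("%lf\n", 1.0 * x[0] * sqrt(1.0 + x[1] * x[1]));

    fprintf(stderr, "N o r m a l\n");    /* normal termination flag   */
    exit(0);
}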
The UNIX script program 2bar_com runs the C-programs gw and gss using the design variable file
XPoint which is resident in each run directory, as input. For practical purposes, 2bar_com, gw and gs
have been placed in a directory above the working directory (or three directories above the run directory).
Hence the references ../../../2bar_com, ../../../gw, etc. in the LS-OPT input file.
Note the output of the string "N o r m a l" so that the completion status may be recognized.
2bar_com:
../../../gw `cat XPoint` >wt; ../../../gss `cat XPoint` >str
The UNIX extraction scripts get_wt and get_str are defined as user interfaces:
get_wt:
cat wt
get_str:
cat str
In Sections 17.1.2 to 17.1.4, a typical semi-automated optimization procedure is illustrated. Section 17.1.5
shows how a trade-off study can be conducted, while the last subsection 17.1.6 shows how an automated
procedure can be specified for this example problem.
The first iteration is chosen to be linear. The input file for LS-OPT is given below. The initial design is located at x = (2.0, 0.8).
A summary of the response surface statistics from the output file is given:
The accuracy of the response surfaces can also be illustrated by plotting the predicted results vs. the
computed results (Figure 17-2).
The R2 values are large. However, the prediction accuracy, especially for weight, seems to be poor, so that a higher order of approximation will be required.
Nevertheless an improved design is predicted with the constraint value (stress) changing from an
approximate 4.884 (severely violated) to 1.0 (the constraint is active). Due to inaccuracy, the actual
constraint value of the optimum is 0.634. The weight changes from 2.776 to 4.137 (3.557 computed) to
accommodate the violated stress:
DESIGN POINT
------------
Variable Name Lower Bound Value Upper Bound
--------------------------------|-----------|----------|-----------
Area 0.2 3.539 4
Base 0.1 0.1 1.6
--------------------------------|-----------|----------|-----------
RESPONSE FUNCTIONS:
------------------
| Scaled | Unscaled |
|---------------------|---------------------|
RESPONSE | Computed Predicted| Computed Predicted|
--------------------------------|----------|----------|----------|----------|
Weight | 3.557 4.137| 3.557 4.137|
Stress | 0.6338 1| 0.6338 1|
--------------------------------|----------|----------|----------|----------|
OBJECTIVE:
---------
Computed Value = 3.557
Predicted Value = 4.137
OBJECTIVE FUNCTIONS:
-------------------
OBJECTIVE NAME | Computed Predicted WT.
--------------------------------|----------|----------|----
Weight | 3.557 4.137| 1
--------------------------------|----------|----------|----
CONSTRAINT FUNCTIONS:
--------------------
CONSTRAINT NAME | Computed | Predicted| Lower | Upper |Viol?
--------------------------------|----------|----------|----------|----------|-----
Stress | 0.6338 1| -1e+30 1|no
--------------------------------|----------|----------|----------|----------|-----
CONSTRAINT VIOLATIONS:
---------------------
| Computed Violation | Predicted Violation |
CONSTRAINT NAME |----------|----------|----------|----------|
| Lower | Upper | Lower | Upper |
--------------------------------|----------|----------|----------|----------|
Stress | - - | - - |
--------------------------------|----------|----------|----------|----------|
MAXIMUM VIOLATION:
------------------
| Computed | Predicted |
Quantity |---------------------------|---------------------------|
| Constraint Value | Constraint Value |
-------------------|----------------|----------|----------------|----------|
Maximum Violation |Stress 0|Stress 6.995e-08|
Smallest Margin |Stress 0.3662|Stress 6.995e-08|
-------------------|----------------|----------|----------------|----------|
To improve the accuracy, a second run is conducted using a quadratic approximation. The following
statements differ from the input file above:
The approximation results have improved considerably, but the stress approximation is still poor.
Approximating Response 'Weight' using 10 points (ITERATION 1)
----------------------------------------------------------------
Global error parameters of response surface
-------------------------------------------
Quadratic Function Approximation:
---------------------------------
Mean response value = 2.8402
An improved design is predicted with the constraint value (stress) changing from a computed 0.734 to 1.0
(the approximate constraint becomes active). Due to inaccuracy, the actual constraint value of the optimum
is a feasible 0.793. The weight changes from 2.561 to 1.925 (1.907 computed).
DESIGN POINT
------------
Variable Name Lower Bound Value Upper Bound
--------------------------------|-----------|----------|-----------
Area 0.2 1.766 4
Base 0.1 0.4068 1.6
--------------------------------|-----------|----------|-----------
RESPONSE FUNCTIONS:
------------------
| Scaled | Unscaled |
|---------------------|---------------------|
RESPONSE | Computed Predicted| Computed Predicted|
--------------------------------|----------|----------|----------|----------|
Weight | 1.907 1.925| 1.907 1.925|
Stress | 0.7927 1| 0.7927 1|
--------------------------------|----------|----------|----------|----------|
OBJECTIVE:
---------
Computed Value = 1.907
Predicted Value = 1.925
OBJECTIVE FUNCTIONS:
-------------------
OBJECTIVE NAME | Computed Predicted WT.
--------------------------------|----------|----------|----
Weight | 1.907 1.925| 1
--------------------------------|----------|----------|----
CONSTRAINT FUNCTIONS:
--------------------
CONSTRAINT NAME | Computed | Predicted| Lower | Upper |Viol?
--------------------------------|----------|----------|----------|----------|-----
Stress | 0.7927 1| -1e+30 1|YES
--------------------------------|----------|----------|----------|----------|-----
CONSTRAINT VIOLATIONS:
---------------------
| Computed Violation | Predicted Violation |
CONSTRAINT NAME |----------|----------|----------|----------|
| Lower | Upper | Lower | Upper |
--------------------------------|----------|----------|----------|----------|
Stress | - - | - 1.033e-06|
--------------------------------|----------|----------|----------|----------|
MAXIMUM VIOLATION:
------------------
| Computed | Predicted |
Quantity |---------------------------|---------------------------|
| Constraint Value | Constraint Value |
-------------------|----------------|----------|----------------|----------|
Maximum Violation |Stress 0|Stress 1.033e-06|
Smallest Margin |Stress 0.2073|Stress 1.033e-06|
-------------------|----------------|----------|----------------|----------|
It seems that further accuracy can only be obtained by reducing the size of the subregion. In the following analysis, the current optimum (1.766; 0.4068) was used as a starting point while the region of interest was cut in half. The order of the approximation is quadratic. The modified statements are:
"2BAR3: Two-Bar Truss: Reducing the region of interest"
$ Created on Thu Jul 11 07:46:24 2002
$
$ DESIGN VARIABLES
Range 'Area' 2
Range 'Base' 0.8
RESPONSE FUNCTIONS:
------------------
| Scaled | Unscaled |
|---------------------|---------------------|
RESPONSE | Computed Predicted| Computed Predicted|
--------------------------------|----------|----------|----------|----------|
Weight | 1.642 1.627| 1.642 1.627|
Stress | 0.9614 1| 0.9614 1|
--------------------------------|----------|----------|----------|----------|
OBJECTIVE:
---------
Computed Value = 1.642
Predicted Value = 1.627
OBJECTIVE FUNCTIONS:
-------------------
OBJECTIVE NAME | Computed Predicted WT.
--------------------------------|----------|----------|----
Weight | 1.642 1.627| 1
--------------------------------|----------|----------|----
CONSTRAINT FUNCTIONS:
--------------------
CONSTRAINT NAME | Computed | Predicted| Lower | Upper |Viol?
--------------------------------|----------|----------|----------|----------|-----
Stress | 0.9614 1| -1e+30 1|no
--------------------------------|----------|----------|----------|----------|-----
CONSTRAINT VIOLATIONS:
---------------------
| Computed Violation | Predicted Violation |
CONSTRAINT NAME |----------|----------|----------|----------|
| Lower | Upper | Lower | Upper |
--------------------------------|----------|----------|----------|----------|
Stress | - - | - - |
--------------------------------|----------|----------|----------|----------|
An improved design is predicted with the constraint value (stress) changing from an approximate 0.8033 (0.7928 computed) to 1.0 (the approximate constraint becomes active). Due to inaccuracy, the actual constraint value of the optimum is a feasible 0.961. This value is now much closer to the value of the simulation result. The weight changes from 1.909 (1.907 computed) to 1.627 (1.642 computed).
The present region of interest (2; 0.8) is chosen in order to conduct a study in which the weight is traded off
against the stress constraint. The trade-off is performed by selecting the Trade-off option in the View panel
of LS-OPTui.
The upper bound of the stress constraint is varied from 0.2 to 2.0 with 20 increments. Select Constraint as
the Trade-off option and enter the bounds and number of increments. Generate the trade-off. This initiates
the solution of a series of optimization problems using the response surface generated in Section 17.1.4,
with the constraint in each (constant coefficient of the constraint response surface polynomial) being varied
between the limits selected. The resulting curve is also referred to as a Pareto optimality curve. When
plotting, select the ‘Constraint’ Stress, and not the ‘Response’ Stress, as the latter represents only the left-
hand side of the constraint equation (17.2).
The resulting trade-off diagram (Figure 17-4) shows the compromise in weight when the stress constraint is
tightened.
This section illustrates the automation of the design process for both a linear and a quadratic response
surface approximation order. 10 iterations are performed for the linear approximation, with only 5 iterations
performed for the more expensive quadratic approximation.
Variable 'Area' 2
Range 'Area' 4
Variable 'Base' 0.8
Range 'Base' 1.6
$
$ EXPERIMENTAL DESIGN
$
Order linear
Number experiment 5
$
$ JOB INFO
$
iterate 10
Order quadratic
Number experiment 10
$
$ JOB INFO
$
iterate 5
The optimization histories have been plotted to illustrate convergence in Figure 17-5.
Remarks:
1. Note that the more accurate but more expensive quadratic approximation converges in about 3
design iterations (30 simulations), while it takes about 7 iterations (35 simulations) for the objective
of the linear case to converge.
2. In general, the lower the order of the approximation, the more iterations are required to refine the
optimum.
17.2.1 Introduction
This example considers the crashworthiness of a simplified small car model. A simplified vehicle moving at
a constant velocity of 15.64m.s-1 (35mph) impacts a rigid pole. See Figure 17-6. The thickness of the front
nose above the bumper is specified as part of the hood. LS-DYNA is used to perform a simulation of the
crash for a simulation duration of 50ms.
[Figure 17-6: simplified car model, showing the hood and bumper]
The objective is to minimize the Head Injury Criterion (HIC) over a 15ms interval of a selected point
subject to an intrusion constraint of 550mm of the pole into the vehicle at 50ms. The HIC is based on linear
head acceleration and is widely used in occupant safety regulations in the automotive industry as a brain
injury criterion. In summary, the criteria of interest are the following:
The design variables are the shell thickness of the car front (t_hood ) and the shell thickness of the bumper
(t_bumper) (see Figure 17-6).
Minimize
HIC (15ms) (17.4)
subject to
Intrusion (50ms) < 550mm
The intrusion is measured as the difference between the displacement of nodes 167 and 432.
Remark:
• The mass is computed but not constrained. This is useful for monitoring the mass changes.
17.2.4 Modeling
The simulation is performed using LS-DYNA. An extract from the parameterized input deck is shown
below. Note how the design variables are labeled for substitution through the characters << >>. The cylinder
for impact is modeled as a rigid wall.
$
$ DEFINITION OF MATERIAL 1
$
*MAT_PLASTIC_KINEMATIC
1,1.000E-07,2.000E+05,0.300,400.,0.,0.
0.,0.,0.
*HOURGLASS
1,0,0.,0,0.,0.
*SECTION_SHELL
1,2,0.,0.,0.,0.,0
2.00,2.00,2.00,2.00,0.
*PART
material type # 3 (Kinematic/Isotropic Elastic-Plastic)
1,1,1,0,1,0
$
$ DEFINITION OF MATERIAL 2
$
*MAT_PLASTIC_KINEMATIC
2,7.800E-08,2.000E+05,0.300,400.,0.,0.
0.,0.,0.
*HOURGLASS
2,0,0.,0,0.,0.
*SECTION_SHELL
2,2,0.,0.,0.,0.,0
<<t_bumper>>,<<t_bumper>>,<<t_bumper>>,<<t_bumper>>,0.
*PART
material type # 3 (Kinematic/Isotropic Elastic-Plastic)
2,2,2,0,2,0
$
$ DEFINITION OF MATERIAL 3
$
*MAT_PLASTIC_KINEMATIC
3,7.800E-08,2.000E+05,0.300,400.,0.,0.
0.,0.,0.
*HOURGLASS
3,0,0.,0,0.,0.
*SECTION_SHELL
3,2,0.,0.,0.,0.,0
<<t_hood>>,<<t_hood>>,<<t_hood>>,<<t_hood>>,0.
*PART
material type # 3 (Kinematic/Isotropic Elastic-Plastic)
3,3,3,0,3,0
$
$ DEFINITION OF MATERIAL 4
$
*MAT_PLASTIC_KINEMATIC
4,7.800E-08,2.000E+05,0.300,400.,0.,0.
0.,0.,0.
*HOURGLASS
4,0,0.,0,0.,0.
*SECTION_SHELL
4,2,0.,0.,0.,0.,0
<<t_hood>>,<<t_hood>>,<<t_hood>>,<<t_hood>>,0.
*PART
material type # 3 (Kinematic/Isotropic Elastic-Plastic)
4,4,4,0,4,0
$
$ DEFINITION OF MATERIAL 5
$
*MAT_PLASTIC_KINEMATIC
5,7.800E-08,2.000E+05,0.300,400.,0.,0.
0.,0.,0.
*HOURGLASS
5,0,0.,0,0.,0.
*SECTION_SHELL
5,2,0.,0.,0.,0.,0
<<t_hood>>,<<t_hood>>,<<t_hood>>,<<t_hood>>,0.
*PART
material type # 3 (Kinematic/Isotropic Elastic-Plastic)
5,5,5,0,5,0
$
A design space of [1; 5] is used for both design variables with no range specified. This means that the range
defaults to the whole design space. The LS-OPT input file is as follows:
"Small Car Problem: EX4a"
$ Created on Mon Aug 26 19:11:06 2002
solvers 1
responses 5
$
$ NO HISTORIES ARE DEFINED
$
$
$ DESIGN VARIABLES
$
variables 2
Variable 't_hood' 1
Lower bound variable 't_hood' 1
Upper bound variable 't_hood' 5
Variable 't_bumper' 3
Lower bound variable 't_bumper' 1
Upper bound variable 't_bumper' 5
$
$ DEFINITION OF SOLVER "1"
$
solver dyna '1'
solver command "lsdyna"
solver input file "car5.k"
solver append file "rigid2"
solver order linear
solver experiment design dopt
solver number experiments 5
solver basis experiment 3toK
solver concurrent jobs 1
$
$ RESPONSES FOR SOLVER "1"
$
response 'Acc_max' 1 0 "DynaASCII Nodout X_ACC 432 Max SAE 60"
response 'Acc_max' linear
response 'Mass' 1 0 "DynaMass 2 3 4 5 MASS"
response 'Mass' linear
response 'Intru_2' 1 0 "DynaASCII Nodout X_DISP 432 Timestep"
response 'Intru_2' linear
response 'Intru_1' 1 0 "DynaASCII Nodout X_DISP 167 Timestep"
response 'Intru_1' linear
response 'HIC' 1 0 "DynaASCII Nodout HIC15 9810. 1 432"
response 'HIC' linear
$
$ NO HISTORIES DEFINED FOR SOLVER "1"
$
$
$ HISTORIES AND RESPONSES DEFINED BY EXPRESSIONS
$
composites 1
composite 'Intrusion' type weighted
The computed vs. predicted HIC and Intru_2 responses are given in Figure 17-7. The corresponding R2
value for HIC is 0.9248, while the RMS error is 27.19%. For Intru_2, the R2 value is 0.9896, while the
RMS error is 0.80%.
Baseline:
---------------------------------------
ITERATION NUMBER (Baseline)
---------------------------------------
DESIGN POINT
------------
Variable Name Lower Bound Value Upper Bound
--------------------------------|-----------|----------|-----------
t_hood 1 1 5
t_bumper 1 3 5
--------------------------------|-----------|----------|-----------
RESPONSE FUNCTIONS:
------------------
| Scaled | Unscaled |
|---------------------|---------------------|
RESPONSE | Computed Predicted| Computed Predicted|
--------------------------------|----------|----------|----------|----------|
Acc_max | 8.345e+04 1.162e+05| 8.345e+04 1.162e+05|
Mass | 0.4103 0.4103| 0.4103 0.4103|
Intru_2 | -736.7 -738| -736.7 -738|
Intru_1 | -161 -160.7| -161 -160.7|
HIC | 68.26 74.68| 68.26 74.68|
--------------------------------|----------|----------|----------|----------|
RESPONSE FUNCTIONS:
------------------
| Scaled | Unscaled |
|---------------------|---------------------|
RESPONSE | Computed Predicted| Computed Predicted|
--------------------------------|----------|----------|----------|----------|
Acc_max | 1.248e+05 1.781e+05| 1.248e+05 1.781e+05|
Mass | 0.6571 0.657| 0.6571 0.657|
Intru_2 | -713.7 -711.4| -713.7 -711.4|
Intru_1 | -164.6 -161.4| -164.6 -161.4|
HIC | 126.7 39.47| 126.7 39.47|
--------------------------------|----------|----------|----------|----------|
The LS-OPT input file is modified as follows (the response approximations are all quadratic (not
shown)):
Order quadratic
Experimental design dopt
Basis experiment 5toK
Number experiment 10
For very expensive simulations, if previously extracted simulation results are available, e.g., from the previous linear iteration in Section 17.2.5, then these points can be used to reduce the computational cost of the quadratic approximation. To do this, the previous AnalysisResults.1 file is copied to the current work directory and renamed AnalysisResults.PRE.1.
As shown in the results below, the computed vs. predicted HIC and Intru_2 responses are now improved relative to the linear approximation. The accuracy of the HIC and Intru_2 responses is given in Figure 17-8. The corresponding R2 value for HIC is 0.9767, while the RMS error is 10.28%. For Intru_2, the R2 value is 0.9913, while the RMS error is 0.61%. When conducting trade-off studies, a higher-order approximation like the current one is preferable. See the trade-off of HIC versus intrusion in the range 450 mm to 600 mm in Figure 17-8c).
Baseline:
---------------------------------------
ITERATION NUMBER (Baseline)
---------------------------------------
DESIGN POINT
------------
Variable Name Lower Bound Value Upper Bound
--------------------------------|-----------|----------|-----------
t_hood 1 1 5
t_bumper 1 3 5
--------------------------------|-----------|----------|-----------
RESPONSE FUNCTIONS:
------------------
| Scaled | Unscaled |
|---------------------|---------------------|
RESPONSE | Computed Predicted| Computed Predicted|
--------------------------------|----------|----------|----------|----------|
Acc_max | 8.345e+04 1.385e+05| 8.345e+04 1.385e+05|
Mass | 0.4103 0.4103| 0.4103 0.4103|
Intru_2 | -736.7 -736| -736.7 -736|
Intru_1 | -161 -160.3| -161 -160.3|
HIC | 68.26 10.72| 68.26 10.72|
--------------------------------|----------|----------|----------|----------|
RESPONSE FUNCTIONS:
------------------
| Scaled | Unscaled |
|---------------------|---------------------|
RESPONSE | Computed Predicted| Computed Predicted|
--------------------------------|----------|----------|----------|----------|
Acc_max | 1.576e+05 1.985e+05| 1.576e+05 1.985e+05|
Mass | 0.6017 0.6018| 0.6017 0.6018|
Intru_2 | -712.7 -711.9| -712.7 -711.9|
Intru_1 | -163.3 -161.9| -163.3 -161.9|
HIC | 171.4 108.2| 171.4 108.2|
--------------------------------|----------|----------|----------|----------|
An automated optimization is performed with a linear approximation. The LS-OPT input file is modified as
follows:
Order linear
Experimental design dopt
Basis experiment 3toK
Number experiment 5
iterate 8
It can be seen in Figure 17-9 that the objective function (HIC) and intrusion constraint are approximately
optimized at the 5th iteration. It takes about 8 iterations for the approximated (solid line) and computed
(square symbols) HIC to correspond. The approximation improves through the contraction of the subregion.
As the variable t_hood never moves to the edge of the subregion during the optimization process, the
heuristic in LS-OPT enforces pure zooming (see Figure 17-10). For t_bumper, panning occurs as well due
to the fact that the linear approximation predicts a variable on the edge of the subregion.
In order to build a more accurate response surface for trade-off studies, the Neural Net method is chosen in the ExpDesign panel. This results in a feed-forward (FF) neural network being solved for the points selected. The recommended point selection scheme (Space Filling) is used. One iteration is performed to analyze only one experimental design with 25 points. The modifications to the command input file are as follows:
$
$ DEFINITION OF SOLVER "1"
$
solver dyna '1'
solver command "lsdyna"
solver input file "car5.k"
solver append file "rigid2"
solver order FF
solver update doe
solver experiment design space_filling
solver number experiments 25
iterate 1
The response surface accuracy is illustrated in Figure 17-11 for the HIC and Intru_2 responses. The HIC
has more scatter than Intru_2 for the 25 design points used.
A trade-off study considers a variation in the Intrusion constraint (originally fixed at 550mm) between 450
and 600mm, the same as in Figure 17-8c). The experimental design used for the responses in Figure 17-11
is shown in Figure 17-12. The effect of the Space-Filling algorithm in maximizing the minimum distance
between the experimental design points can clearly be seen from the evenly distributed design. The resulting
Pareto optimality curves for HIC and the two design variables (t_hood and t_bumper) can be seen in
Figure 17-13. It can be seen that a tightening of the Intrusion constraint increases the HIC value through an
increase of the hood thickness in the optimal design.
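The maximin property can be checked directly: the sketch below (not LS-OPT code) computes the minimum pairwise distance that a space-filling design seeks to maximize, for a few hypothetical normalized points:

#include <stdio.h>
#include <math.h>

#define NPT  4
#define NVAR 2

/* Minimum pairwise Euclidean distance of a set of design points. */
double min_dist(const double x[NPT][NVAR])
{
    double best = 1.0e30;
    int i, j, k;
    for (i = 0; i < NPT; i++)
        for (j = i + 1; j < NPT; j++) {
            double d2 = 0.0;
            for (k = 0; k < NVAR; k++)
                d2 += (x[i][k] - x[j][k]) * (x[i][k] - x[j][k]);
            if (d2 < best) best = d2;
        }
    return sqrt(best);
}

int main(void)
{
    /* hypothetical normalized design points (t_hood, t_bumper) */
    const double x[NPT][NVAR] = {
        { 0.1, 0.2 }, { 0.9, 0.1 }, { 0.4, 0.8 }, { 0.8, 0.9 }
    };
    printf("minimum distance = %g\n", min_dist(x));
    return 0;
}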
The limited reliability-based design optimization in LS-OPT is illustrated in this example. The optimization problem is modified as follows:

Minimize HIC    (17.5)
subject to
Intrusion + 6σ_Intrusion < 550mm

where σ_HIC and σ_Intrusion are the standard deviations of the HIC and Intrusion responses, respectively. The design space for the two variables is increased from [1; 5] to [1; 6].
The formulation in Eq. (17.5) implies that the car is made safer by 6 standard deviations of the intrusion.
The standard deviation of both the HIC and Intrusion responses is calculated using the procedure outlined in
Section 2.15.5. The resulting command input file is as follows:
responses 10
$
$ NO HISTORIES ARE DEFINED
$
$ DEFINITION OF SOLVER "MEAN"
$
solver dyna 'MEAN'
solver command "lsdyna"
solver input file "car5.k"
solver append file "rigid2"
$
$ RESPONSES FOR SOLVER "1"
$
response 'Intru_2_m' 1 0 "DynaASCII Nodout X_DISP 432 Timestep"
response 'Intru_1_m' 1 0 "DynaASCII Nodout X_DISP 167 Timestep"
response 'Intrusion_m' expression {Intru_1_m - Intru_2_m}
response 'HIC_m' 1 0 "DynaASCII Nodout HIC15 9810. 1 432"
$
$ LOCAL VARIABLES
$
solver variable 't_hood_m'
solver variable 't_bumper_m'
$
$ EXPERIMENTAL DESIGN
$
Solver Order linear
Solver Experimental design dopt
Solver Basis experiment 5toK
Solver Number experiment 5
$
$ JOB INFO
$
concurrent jobs 1
$
$ DEFINITION OF SOLVER "_RANDOM_"
$
solver dyna '_RANDOM_'
solver command "lsdyna"
solver input file "car5s.k"
solver append file "rigid2"
$
$ LOCAL VARIABLES
$
solver variable 't_hood_s'
solver variable 't_bumper_s'
$
$ RESPONSES FOR SOLVER "1"
$
response 'Intru_2_s' 1 0 "DynaASCII Nodout X_DISP 432 Timestep"
response 'Intru_1_s' 1 0 "DynaASCII Nodout X_DISP 167 Timestep"
response 'Intrusion_s' expression {Intru_1_s - Intru_2_s}
response 'HIC_s' 1 0 "DynaASCII Nodout HIC15 9810. 1 432"
response 'Intrusion_dev' "DynaStat STDDEV Intrusion_s"
response 'HIC_dev' "DynaStat STDDEV HIC_s"
$
$ EXPERIMENTAL DESIGN
$
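Conceptually, the DynaStat STDDEV extractions above compute a sample standard deviation over the perturbed runs. A sketch follows; the divisor m − 1 and the sample values are assumptions, not LS-OPT internals:

#include <stdio.h>
#include <math.h>

/* Sample standard deviation of m response values. */
double std_dev(const double *y, int m)
{
    double mean = 0.0, ss = 0.0;
    int i;
    for (i = 0; i < m; i++) mean += y[i];
    mean /= m;
    for (i = 0; i < m; i++) ss += (y[i] - mean) * (y[i] - mean);
    return sqrt(ss / (m - 1));
}

int main(void)
{
    double y[5] = { 551.0, 546.0, 559.0, 548.0, 552.0 };  /* e.g. intrusions */
    printf("sigma = %g\n", std_dev(y, 5));
    return 0;
}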
The result of the optimization process is given in Figure 17-14. Shown are both the Intrusion and HIC responses. The reliability limit on the Intrusion response is shown as a dashed line. This corresponds to the left-hand side of the constraint in Eq. (17.5), rewritten as Intrusion + 6σ_Intrusion.
A 6-sigma range is given in the plot for the HIC response. It can be seen that the mean HIC-value is reduced
in the presence of the reliability constraint.
[Figure 17-14: Optimization history of HIC (mean and mean ± 6σ) and Intrusion (mean and mean + 6σ) versus iteration number]
The problem consists of a tube impacting a rigid wall as shown in Figure 17-15. The energy absorbed is
maximized subject to a constraint on the rigid wall impact force. The cylinder has a constant mass of 0.54
kg with the design variables being the mean radius and thickness. The length of the cylinder is thus
dependent on the design variables because of the mass constraint. A concentrated mass of 500 times the
cylinder weight is attached to the end of the cylinder not impacting the rigid wall. The deformed shape at
20ms is shown in Figure 17-16 for a typical design.
[Figure 17-15: cylinder impacting a rigid wall at 10 m/s; design variables x1 and x2]

The optimization problem is stated as:

Maximize E_internal(x) at t = 0.02
subject to
F_wall_average(x1, x2) ≤ 70 000
l(x) = 0.52 / (2πρ x1 x2)

where the design variables x1 and x2 are the radius and the thickness of the cylinder, respectively. E_internal(x) at t = 0.02 is the objective function, and F_wall_average(x) and l(x) are the average normal force on the rigid wall and the length of the cylinder, respectively.
The problem is simulated using LS-DYNA. The following TrueGrid input file including the <<name>>
statements is used to create the FE input deck with the FE model as shown in Figure 17-16. Note that the
design variables have been scaled.
l2 [75.0/<<Radius>>*0.02]
h2 .002
v0 10.
n .33
pi 3.14159
;
plane 1 0 0 -.002 0 0 1 .001 ston pen 2. stick ;
sid 1 lsdsi 13 slvmat 1;scoef .4 dcoef .4 sfsps 1.5 ; ; ;
c ************** part 1 mat 1 ************* shell
cylinder
-1; 1 60; 1 50 51;
%r
0 360
0 %l [%l2+%l]
dom 1 1 1 1 2 3
x=x+.01*%h*sin(%pi*z*57.3/(%pi*(%r*%r*%h*%h/(12*(1-%n*%n)))**.25))
thick %h
thi ;;2 3; %h2
c bi ; ;-3 0 -3; dx 1 dy 1 rx 1 ry 1 rz 1 ;
c interrupt
swi ;; ;1
velocity 0 0 [-%v0]
mate 1
mti ;; 2 3; 2
c element spring block
epb 1 1 1 1 2 3
endpart
merge
stp .000001
write
end
In the first iteration, a quadratic approximation is chosen from the beginning. The ASCII database is suitable
for this analysis as the energy and impact force can be extracted from the glstat and rwforc databases
respectively. Five processors are available. The region of interest is arbitrarily chosen to be about half the
size of the design space.
The following LS-OPT command input deck was used to find the approximate optimum solution:
"Cylinder Impact Problem"
$ Created on Thu Jul 11 11:37:33 2002
$
$ DESIGN VARIABLES
$
variables 2
Variable 'Radius' 75
Lower bound variable 'Radius' 20
Upper bound variable 'Radius' 100
Range 'Radius' 50
Variable 'Wall_Thickness' 3
Lower bound variable 'Wall_Thickness' 2
The curve-fitting results below show that the internal energy is approximated reasonably well whereas the
average force is poorly approximated. The accuracy plots confirm this result (Figure 17-17).
The initial design below shows that the constraint is severely exceeded.
DESIGN POINT
------------
Variable Name Lower Bound Value Upper Bound
--------------------------------|-----------|----------|-----------
Radius 20 75 100
Wall_Thickness 2 3 6
--------------------------------|-----------|----------|-----------
RESPONSE FUNCTIONS:
------------------
| Scaled | Unscaled |
|---------------------|---------------------|
RESPONSE | Computed Predicted| Computed Predicted|
--------------------------------|----------|----------|----------|----------|
Internal_Energy | 1.296e+04 1.142e+04| 1.296e+04 1.142e+04|
Rigid_Wall_Force | 1.749e+05 1.407e+05| 1.749e+05 1.407e+05|
--------------------------------|----------|----------|----------|----------|
Figure 17-17: Prediction accuracy of Internal Energy and Rigid Wall Force (One Quadratic iteration)
Despite the relatively poor approximation a prediction of the optimum is made based on the approximation
response surface. The results are shown below. The fact that the optimal Radius is on the lower bound of
the subregion specified (Range = 50), suggests an optimal value below 50.
DESIGN POINT
------------
Variable Name Lower Bound Value Upper Bound
--------------------------------|-----------|----------|-----------
Radius 20 50 100
Wall_Thickness 2 2.978 6
--------------------------------|-----------|----------|-----------
RESPONSE FUNCTIONS:
------------------
| Scaled | Unscaled |
|---------------------|---------------------|
RESPONSE | Computed Predicted| Computed Predicted|
--------------------------------|----------|----------|----------|----------|
Internal_Energy | 7914 8778| 7914 8778|
Rigid_Wall_Force | 4.789e+04 7e+04| 4.789e+04 7e+04|
--------------------------------|----------|----------|----------|----------|
During the previous optimization step, the Radius variable was reduced from 75 to 50 (on the boundary
of the region of interest). It was also apparent that the approximations were fairly inaccurate. Therefore, in
the new iteration, the region of interest is reduced from [50;2] to [35;1.5] while retaining a quadratic
approximation order. The starting point is taken as the current optimum: (50,2.978). The modified
commands in the input file are as follows:
$
$ DESIGN VARIABLES
$
variables 2
Variable 'Radius' 50
Lower bound variable 'Radius' 20
Upper bound variable 'Radius' 100
Range 'Radius' 35
Variable 'Wall_Thickness' 2.9783
Lower bound variable 'Wall_Thickness' 2
Upper bound variable 'Wall_Thickness' 6
Range 'Wall_Thickness' 1.5
As shown below, the accuracy of fit improves but the average rigid wall force is still inaccurate.
Approximating Response 'Internal_Energy' using 10 points (ITERATION 1)
----------------------------------------------------------------
Global error parameters of response surface
-------------------------------------------
Quadratic Function Approximation:
---------------------------------
Mean response value = 8640.2050
R^2 = 0.8949
R^2 (adjusted) = 0.8949
R^2 (prediction) = 0.2204
Determinant of [X]'[X] = 0.0556
Figure 17-18: Prediction accuracy of Internal Energy and Rigid Wall Force (One Quadratic iteration)
RESPONSE FUNCTIONS:
------------------
| Scaled | Unscaled |
|---------------------|---------------------|
RESPONSE | Computed Predicted| Computed Predicted|
--------------------------------|----------|----------|----------|----------|
Internal_Energy | 9777 9575| 9777 9575|
Rigid_Wall_Force | 6.417e+04 7e+04| 6.417e+04 7e+04|
--------------------------------|----------|----------|----------|----------|
Because of the large change of Wall_Thickness onto the upper bound of the region of interest, a third iteration is conducted, keeping the region of interest the same. The starting point is the previous optimum:
Because the size of the region of interest remained the same, the curve-fitting results show only a slight
change (because of the new location), in this case an improvement. However, as the optimization results
below show, the design is much improved, i.e. the objective value has increased whereas the approximate
constraint is active. Unfortunately, due to the poor fit of the Rigid_Wall_Force, the simulation result
exceeds the force constraint by about 10kN (14%). Further reduction of the region of interest is required to
reduce the error, or filtering of the force can be considered to reduce the noise on this response.
DESIGN POINT
------------
Variable Name Lower Bound Value Upper Bound
--------------------------------|-----------|----------|-----------
Radius 20 36.51 100
Wall_Thickness 2 4.478 6
--------------------------------|-----------|----------|-----------
RESPONSE FUNCTIONS:
------------------
| Scaled | Unscaled |
|---------------------|---------------------|
RESPONSE | Computed Predicted| Computed Predicted|
--------------------------------|----------|----------|----------|----------|
Internal_Energy | 1.129e+04 1.075e+04| 1.129e+04 1.075e+04|
Rigid_Wall_Force | 8.007e+04 7e+04| 8.007e+04 7e+04|
--------------------------------|----------|----------|----------|----------|
The table below gives a summary of the three iterations of the step-by-step procedure.
It is apparent that the result of the second iteration is a dramatic improvement on the starting design and a
good approximation to the converged optimum design.
Because of the poor accuracy of the response surface fit for the rigid wall force above, it was decided to modify the force constraint so that the peak filtered force is used instead. Therefore, the previous response definition for Rigid_Wall_Force is replaced with a command that extracts the maximum rigid wall force from a response in which frequencies exceeding 300 Hz are excluded.
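For illustration of the filtering idea only (the SAE filter used by LS-OPT is a specific standardized filter; the first-order filter below is merely a sketch), a 300 Hz low-pass operation could look like:

#include <stdio.h>

/* First-order low-pass filter: cutoff fc in Hz, uniform step dt in s,
   applied in place to x[0..n-1]. */
void lowpass(double *x, int n, double dt, double fc)
{
    double rc = 1.0 / (2.0 * 3.14159265358979 * fc);
    double alpha = dt / (rc + dt);
    int i;
    for (i = 1; i < n; i++)
        x[i] = x[i - 1] + alpha * (x[i] - x[i - 1]);
}

int main(void)
{
    double f[5] = { 0.0, 1.0e6, 2.0e5, 1.5e6, 1.0e5 };   /* made-up forces */
    int i;
    lowpass(f, 5, 1.0e-4, 300.0);
    for (i = 0; i < 5; i++) printf("%g\n", f[i]);
    return 0;
}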
As expected, the response histories (Figure 17-19) show that the baseline design is severely infeasible (the first peak force is about 1.75 × 10^6 vs. the constraint value of 0.08 × 10^6). A steady reduction in the error of the response surfaces is observed up to about iteration 5. The optimization terminates after 16 iterations, having reached the 1% threshold for both objective and design variable changes.
[Optimization history plots: a) Radius, b) Wall_Thickness, c) Internal_Energy, d) Rigid_Wall_Force]
The optimization process steadily reduces the infeasibility, but the force constraint is still slightly violated
when convergence is reached. The internal energy is significantly lower than previously:
DESIGN POINT
------------
Variable Name Lower Bound Value Upper Bound
--------------------------------|-----------|----------|-----------
Radius 20 20.51 100
Wall_Thickness 2 4.342 6
--------------------------------|-----------|----------|-----------
RESPONSE FUNCTIONS:
------------------
| Scaled | Unscaled |
|---------------------|---------------------|
RESPONSE | Computed Predicted| Computed Predicted|
--------------------------------|----------|----------|----------|----------|
Internal_Energy | 8344 8645| 8344 8645|
Rigid_Wall_Force | 8.112e+04 8e+04| 8.112e+04 8e+04|
--------------------------------|----------|----------|----------|----------|
Figure 17-20 below confirms that the final design is only slightly infeasible when the maximum filtered
force exceeds the specified limit for a short duration at around 9ms.
Figure 17-20: Cylinder: Constrained rigid wall force: F(t) < 80000 (SAE 300Hz filtered)
The design parameterization for the sheet metal forming example is shown in Figure 17-21.
[Figure 17-21: design parameterization of the sheet metal forming problem: punch and die with radii r1, r2, r3 and forces F1, F2]
The design problem is formulated to minimize the maximum tool radius while also specifying an FLD
constraint and a maximum thickness reduction of 20% (thinning constraint). Since the user wants to enforce
the FLD and thinning constraints strictly, these constraints are defined as strict. To minimize the
maximum radius, a small upper bound for the radii has been specified (arbitrarily chosen as a number close
to the lower bound of the design space, namely 1.1). The optimization solver will then minimize the
maximum difference between the radii and their respective bounds. The radius constraints must not be
enforced strictly. This translates to the following mathematical formulation:
Minimize e
with
1.5 ≤ r1 ≤ 4.5
1.5 ≤ r2 ≤ 4.5
1.5 ≤ r3 ≤ 4.5
subject to
g_FLD(x) < 0.0
Δt(x) < 20%
r1 − 1.1 < e
r2 − 1.1 < e
r3 − 1.1 < e
e > 0.
The initial run is a quadratic analysis designed as an initial investigation. The subregion considered for this study is 2.0 large in r1, r2 and r3 and is centered about (1.5, 1.5, 1.5)T.
The FLD constraint formulation tested in this phase is based on the maximum perpendicular distance of a
point violating the FLD constraint to the FLD curve (see Section 10.8.2).
The file ShellSetList contains commands for LS-DYNA in addition to the preprocessor output. It is
slotted into the input file. Adaptive meshing is chosen as an analysis feature for the simulation. The FLD
curve data is also specified in this file. The extra commands are:
*DATABASE_BINARY_RUNRSF
70
*DATABASE_EXTENT_BINARY
0, 0, 0, 1, 0, 0, 0, 1
0, 0, 0, 0, 0, 0
$
$ SLIDING INTERFACE DEFINITIONS
$
$ TrueGrid Sliding Interface # 1
$
*CONTACT_FORMING_ONE_WAY_SURFACE_TO_SURFACE
$ workpiece vs punch
0.1000000 0.000 0.000
1 2 3 3 1
0.0
$
*CONTACT_FORMING_ONE_WAY_SURFACE_TO_SURFACE
$ workpiece vs die
1 3 3 3 1 1
0.1000000 0.000 0.000
0.0
$
*CONTACT_FORMING_ONE_WAY_SURFACE_TO_SURFACE
$ workpiece vs blankholder
1 4 3 3 1 1
0.1000000 0.000 0.000
0.0
$
*CONTROL_ADAPTIVE
$ ADPFREQ ADPTOL ADPOPT MAXLVL TBIRTH TDEATH LCADP IOFLAG
0.100E-03 5.000 2 3 0.000E+00 1.0000000 0 1
$ ADPSIZE ADPASS IREFLG ADPENE
0.0000000 1 0 3.0000
*LOAD_RIGID_BODY
The input file (file m3.tg.opt) used to generate the FE mesh in TrueGrid is:
.
.
.
0.312799990E+00 0.481799988E+03
0.469900012E+00 0.517200012E+03
0.705600023E+00 0.555299988E+03
;
c
c die cross-section
para
c
r1 <<Radius_1>> c upper radius minimum = 2.
r2 <<Radius_2>> c middle radius minimum = 2.
r3 <<Radius_3>> c lower radius minimum = 2.
load2 -100000
load3 -20000
th1 1.0 c thickness of blank
th3 .00 c thickness of die and punch
th2 [1.001*%th1]
l1 20 c length of draw (5-40)
c
z5 [%l1-22]
c Position of workpiece
z4 [%z5+1.001*%th1/2.+%th3/2]
c Position of blankholder
z3 [%z4+1.001*%th1/2.+%th3/2]
n1 [25+4.0*%l1]
n2 [25+8.0*%l1]
c part 2
z6 [%z5+4+%th2]
z7 [%z5+%l1+4+%th2]
;
.
.
.
.
.
.
endpart
c ***************** part 2 mat 2 ********* punch
cylinder
1 8 35 40 67 76 [76+%n1] [70+%n1+10]; 1 41 ; -1 ;
.001 17. 23. 36. 44. 50. 75. 100.
0. 90.
%z7
.
.
.
thick %th3
mate 2
endpart
The error parameters for the fitted functions are given in the following output (from the lsopt_output file):
Approximating Response 'Thinning' using 16 points (ITERATION 1)
----------------------------------------------------------------
Global error parameters of response surface
-------------------------------------------
Quadratic Function Approximation:
---------------------------------
Mean response value = 27.8994
The thinning has a reasonably accurate response surface, but the FLD approximation requires further refinement. The initial design has the following response surface results, which fail the criterion for maximum thinning but not for FLD:
DESIGN POINT
------------
Variable Name Lower Bound Value Upper Bound
--------------------------------|-----------|----------|-----------
Radius_1 1 1.5 4.5
Radius_2 1 1.5 4.5
Radius_3 1 1.5 4.5
--------------------------------|-----------|----------|-----------
As shown below, after 1 iteration, a feasible design is generated. The simulation response of the optimum is
closely approximated by the response surface.
DESIGN POINT
------------
Variable Name Lower Bound Value Upper Bound
--------------------------------|-----------|----------|-----------
Radius_1 1 3.006 4.5
Radius_2 1 3.006 4.5
Radius_3 1 3.006 4.5
--------------------------------|-----------|----------|-----------
CONSTRAINT FUNCTIONS:
--------------------
CONSTRAINT NAME | Computed | Predicted| Lower | Upper |Viol?
--------------------------------|----------|----------|----------|----------|-----
FLD | -0.04308 -0.03841| -1e+30 0|no
Rad1 | 3.006 3.006| -1e+30 1.1|YES
Rad2 | 3.006 3.006| -1e+30 1.1|YES
Rad3 | 3.006 3.006| -1e+30 1.1|YES
Thinning_scaled | 0.2172 0.2| -1e+30 0.2|no
--------------------------------|----------|----------|----------|----------|-----
CONSTRAINT VIOLATIONS:
---------------------
| Computed Violation | Predicted Violation |
CONSTRAINT NAME |----------|----------|----------|----------|
| Lower | Upper | Lower | Upper |
--------------------------------|----------|----------|----------|----------|
FLD | - - | - - |
Rad1 | - 1.906| - 1.906|
Rad2 | - 1.906| - 1.906|
The optimization process can also be automated so that no user intervention is required. The starting design, lower and upper bounds, and region of interest are modified from the single-iteration study above. The number of D-optimal experiments is reduced because of the linear approximation used:
Order linear
Experimental design dopt
Basis experiment 3toK
Number experiment 7
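The seven points follow from the default over-sampling rule for a linear D-optimal design, 1.5(n+1)+1 points for n variables (the rule is quoted in the MDO example of Section 17.7). A quick check of this count, in an illustrative Python fragment (not part of LS-OPT):

# Default number of D-optimal points for a linear approximation,
# using the 1.5*(n+1)+1 over-sampling rule (truncated to an integer).
def dopt_points(n_var):
    return int(1.5 * (n_var + 1)) + 1

print(dopt_points(3))   # 3 design variables, as in this example: 7 points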
The optimization history is shown in Figure 17-23 for the design variables and responses:
Figure 17-23: Optimization history of design variables and responses (automated design)
DESIGN POINT
------------
Variable Name Lower Bound Value Upper Bound
--------------------------------|-----------|----------|-----------
Radius_1 1 2.653 4.5
Radius_2 1 2.286 4.5
Radius_3 1 2.004 4.5
--------------------------------|-----------|----------|-----------
RESPONSE FUNCTIONS:
------------------
| Scaled | Unscaled |
|---------------------|---------------------|
RESPONSE | Computed Predicted| Computed Predicted|
--------------------------------|----------|----------|----------|----------|
Thinning | 19.92 19.6| 19.92 19.6|
FLD | -0.000843 -0.002907| -0.000843 -0.002907|
--------------------------------|----------|----------|----------|----------|
A comparison between the starting and the final values is tabulated below:
The FLD diagrams (Figure 17-24) for the baseline design and the optimum illustrate the improvement of the
FLD feasibility:
A methodology for deriving material parameters from experimental results, known as material parameter
identification, is applied here using optimization. The example has the following features:
The problem [62] is illustrated in Figure 17-26. Shown is an impacting mass (chest form) and a deploying
airbag. The experimental results contain the acceleration of the mass for two impacting velocities, namely 4 and 5 m/s. The velocity and displacement data are derived from the acceleration through time integration. Altogether 54 responses, 9 per time history curve (acceleration, velocity and displacement for both impacting velocities), are used in the regression. This represents a monitoring increment of 5 ms. The design variables (x) are the ordinates of the leakage coefficient versus pressure curve that is used as a material load curve in LS-DYNA when simulating the impact depicted in Figure 17-26. Results are shown for both 5 and 10 design variables. The load curve is implemented as a piece-wise linear table lookup of the leakage curve data.
$$\mathrm{LSR} = F = \sum_{j=1}^{R} \left( \frac{f_j(\mathbf{x}) - F_j}{\Gamma_j} \right)^2$$

where the $F_j$ are the experimental targets and the $\Gamma_j$ are scaling factors required for the normalization or weighting of each respective response. In addition, the simulated acceleration and displacement data $f_j(\mathbf{x})$ are scaled to match the experimental units, while the displacement data are offset to ensure that the quantity

$$\frac{f_j(\mathbf{x}) - F_j}{\Gamma_j}$$
In this formulation, the deviations from the respective target values are incorporated as constraint violations,
so that the optimization problem for parameter identification becomes:
Minimize $e$, subject to

$$\left| \frac{f_j(\mathbf{x}) - F_j}{\Gamma_j} \right| \le e, \qquad j = 1, \dots, 54$$
$$e \ge 0.$$
This formulation is automatically activated in LS-OPT without specifying the objective function as the
maximum constraint violation. This is due to the fact that an auxiliary problem is solved internally
whenever an infeasible design is found, ignoring the objective function until a feasible design is found.
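A conceptual sketch of the two residual measures (illustrative Python, not LS-OPT source code): the LSR formulation sums the squared normalized deviations, while the auxiliary problem of the second formulation minimizes the largest normalized deviation e.

# Illustrative sketch: the two residual measures for a trial design,
# given simulated responses f[j], targets F[j] and scale factors G[j]
# (the Gamma_j above).
def lsr(f, F, G):
    # least-squares residual (first formulation)
    return sum(((fj - Fj) / Gj) ** 2 for fj, Fj, Gj in zip(f, F, G))

def max_violation(f, F, G):
    # maximum normalized deviation e (second formulation); this is
    # what the internal auxiliary problem minimizes while the design
    # remains infeasible
    return max(abs(fj - Fj) / Gj for fj, Fj, Gj in zip(f, F, G))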
When used for parameter identification, the constraints can in general never all be satisfied exactly, because the system of target-matching equations is typically over-determined.
Both formulations have one additional set of constraints in this example, namely to ensure monotonicity of the load curve to be developed, i.e. each successive ordinate of the leakage curve must not be smaller than the one preceding it (p = 54 being the number of experimental collocation points referred to above). The monotonicity constraints are strictly enforced using the 'strict' option in LS-OPT, i.e. they do not contain the slack variable e. This means that, where necessary, the constraints at the collocation points are compromised in order to satisfy the monotonicity constraints.
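For the 5-variable case these correspond to the composites C1 to C4 in the input file below; conceptually (illustrative Python, with placeholder ordinate values):

# Sketch of the monotonicity constraints C1..C4 (5-variable case):
# each is the difference of successive leakage-curve ordinates and is
# required to be non-negative. The ordinate values are placeholders.
x = [0.10, 0.15, 0.22, 0.30, 0.41]         # leakage ordinates (illustrative)
C = [x[k + 1] - x[k] for k in range(4)]    # C1..C4; each must satisfy C >= 0
assert all(c >= 0.0 for c in C)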
17.5.4 Implementation
The LS-OPT input files for this problem are shown below:
Note the use of a Targeted Composite for the objective (‘Residual’). The initial leakage curve is horizontal.
The two LS-DYNA solvers (4MPS and 5MPS) with input decks (sim4mpros.inp and sim5mpros.inp)
refer to the two cases, i.e., 4 and 5 m/s approach velocity, respectively.
The LS-OPT input file is the same for this formulation as before for all sections but the constraints. As the
formulation is driven by the violation of the constraints, they are defined as equality constraints (identical
upper and lower bounds). In addition, the monotonicity constraints are defined as hard by using the ‘strict’
option:
$
$ CONSTRAINT DEFINITIONS
$
constraints 58
constraint 'acc10_4'
lower bound constraint 'acc10_4' 0.449417
upper bound constraint 'acc10_4' 0.449417
constraint 'acc15_4'
lower bound constraint 'acc15_4' 0.754247
upper bound constraint 'acc15_4' 0.754247
constraint 'acc20_4'
lower bound constraint 'acc20_4' 1.1858
upper bound constraint 'acc20_4' 1.1858
constraint 'acc25_4'
lower bound constraint 'acc25_4' 1.76239
upper bound constraint 'acc25_4' 1.76239
constraint 'acc30_4'
lower bound constraint 'acc30_4' 2.21678
upper bound constraint 'acc30_4' 2.21678
constraint 'acc35_4'
lower bound constraint 'acc35_4' 2.27923
upper bound constraint 'acc35_4' 2.27923
constraint 'acc40_4'
lower bound constraint 'acc40_4' 1.82374
upper bound constraint 'acc40_4' 1.82374
constraint 'acc45_4'
lower bound constraint 'acc45_4' 1.218
upper bound constraint 'acc45_4' 1.218
constraint 'acc50_4'
lower bound constraint 'acc50_4' 0.727288
upper bound constraint 'acc50_4' 0.727288
constraint 'vel10_4'
lower bound constraint 'vel10_4' 3.49244
upper bound constraint 'vel10_4' 3.49244
constraint 'vel15_4'
lower bound constraint 'vel15_4' 3.19588
upper bound constraint 'vel15_4' 3.19588
constraint 'vel20_4'
lower bound constraint 'vel20_4' 2.71324
upper bound constraint 'vel20_4' 2.71324
constraint 'vel25_4'
lower bound constraint 'vel25_4' 1.9779
upper bound constraint 'vel25_4' 1.9779
constraint 'vel30_4'
lower bound constraint 'vel30_4' 0.973047
upper bound constraint 'vel30_4' 0.973047
constraint 'vel35_4'
lower bound constraint 'vel35_4' -0.169702
upper bound constraint 'vel35_4' -0.169702
constraint 'vel40_4'
lower bound constraint 'vel40_4' -1.21011
upper bound constraint 'vel40_4' -1.21011
constraint 'vel45_4'
lower bound constraint 'vel45_4' -1.97152
upper bound constraint 'vel45_4' -1.97152
constraint 'vel50_4'
lower bound constraint 'vel50_4' -2.44973
upper bound constraint 'vel50_4' -2.44973
constraint 'disp10_4'
lower bound constraint 'disp10_4' 0.880516
upper bound constraint 'disp10_4' 0.880516
constraint 'disp15_4'
lower bound constraint 'disp15_4' 0.712695
upper bound constraint 'disp15_4' 0.712695
constraint 'disp20_4'
lower bound constraint 'disp20_4' 0.564066
upper bound constraint 'disp20_4' 0.564066
constraint 'disp25_4'
lower bound constraint 'disp25_4' 0.445582
upper bound constraint 'disp25_4' 0.445582
constraint 'disp30_4'
lower bound constraint 'disp30_4' 0.370841
upper bound constraint 'disp30_4' 0.370841
constraint 'disp35_4'
lower bound constraint 'disp35_4' 0.350629
upper bound constraint 'disp35_4' 0.350629
constraint 'disp40_4'
lower bound constraint 'disp40_4' 0.386093
upper bound constraint 'disp40_4' 0.386093
constraint 'disp45_4'
lower bound constraint 'disp45_4' 0.466906
upper bound constraint 'disp45_4' 0.466906
constraint 'disp50_4'
lower bound constraint 'disp50_4' 0.578468
upper bound constraint 'disp50_4' 0.578468
constraint 'acc10_5'
lower bound constraint 'acc10_5' 0.723756
upper bound constraint 'acc10_5' 0.723756
constraint 'acc15_5'
lower bound constraint 'acc15_5' 0.977004
upper bound constraint 'acc15_5' 0.977004
constraint 'acc20_5'
lower bound constraint 'acc20_5' 1.68931
upper bound constraint 'acc20_5' 1.68931
constraint 'acc25_5'
lower bound constraint 'acc25_5' 2.64071
upper bound constraint 'acc25_5' 2.64071
constraint 'acc30_5'
lower bound constraint 'acc30_5' 3.11405
upper bound constraint 'acc30_5' 3.11405
constraint 'acc35_5'
lower bound constraint 'acc35_5' 2.4428
upper bound constraint 'acc35_5' 2.4428
constraint 'acc40_5'
lower bound constraint 'acc40_5' 1.30942
upper bound constraint 'acc40_5' 1.30942
constraint 'disp50_5'
lower bound constraint 'disp50_5' 0.456215
upper bound constraint 'disp50_5' 0.456215
move
constraint 'C1'
strict
lower bound constraint 'C1' 0
constraint 'C2'
lower bound constraint 'C2' 0
constraint 'C3'
lower bound constraint 'C3' 0
constraint 'C4'
lower bound constraint 'C4' 0
$
$ EXPERIMENTAL DESIGN
$
Order linear
Experimental design dopt
Basis experiment 3toK
Number experiment 10
$
$ JOB INFO
$
concurrent jobs 10
iterate param design 0.001
iterate param objective 0.001
iterate 20
STOP
The residual is retained in the formulation as the objective although it is not used in the optimization
process. This is so that the residual can be monitored, and used for comparison with the LSR formulation
results.
17.5.5 Results
The result of the optimization is shown in Figure 17-27, which shows the leakage curves for the 5- and 10-variable models. It can be seen that the introduction of more points on the leakage curve allows more resolution in the low-pressure range of the material model.
As an example, the matching between the experimental and simulated acceleration, velocity and
displacement is depicted for both 4 and 5 m/s chest form velocities in Figure 17-28 for the 10-variable LSR
Formulation case. It can be seen that the displacement and velocity curves are closely matched by the
optimum curve parameters, and that the discrepancy as exhibited by the residual is mainly due to the
acceleration curve not being matched exactly.
The optimization histories of the objective (residual) and of one of the variables are shown in Figure 17-29 and Figure 17-30, respectively. In Figure 17-29, it can be seen that the case with fewer design variables converges more rapidly, while the activation of first the panning and then the zooming heuristic in LS-OPT is clearly evident in the optimization history of the variable Leakage_5 (Figure 17-30).
• LS-DYNA is used for both explicit crash and implicit NVH simulations.
• Variable screening is performed.
• Multidisciplinary design optimization (MDO) is illustrated with a simple example.
• Extraction is performed using standard LS-DYNA interfaces.
To illustrate a relatively simple example of multidisciplinary design optimization (MDO), the small car of
Section 17.2 is extended to have five variables. In addition, one Noise, Vibration and Harshness (NVH)
parameter is considered as a constraint, i.e. the first torsional vibrational mode frequency.
Figure 17-31 shows the modified small car. Rails are added, and the combined bumper-hood section is
separated into a grill, hood and bumper. The mass of the affected components in the initial design is 1.328 units, while the torsional mode frequency is 2.234 Hz. This corresponds to mode number 14. The Head Injury Criterion (HIC) based on a 15 ms interval is initially 17500. The initial intrusion of the bumper is 531.9 mm.
Figure 17-31: Small car with crash rails – definition of design variables (hood, grill, roof, bumper, front rail and rear rail thicknesses)
Minimize Mass(x_crash)

subject to the HIC, intrusion and frequency constraints defined below, with

x_crash = [t_rail_front, t_roof, t_bumper, t_hood, t_rail_back]^T
x_NVH = [t_rail_front, t_roof, t_bumper, t_hood, t_rail_back]^T
The LS-OPT input file below illustrates how the variables were reduced for the NVH solver. Note how the
two solvers, i.e., crash and NVH, are specified. Variables are flagged as local with the Local
variable_name statement, and then linked to a solver using the Solver variable
variable_name command.
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
$ SOLVER "CRASH"
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
$
$ DEFINITION OF SOLVER "CRASH"
$
solver dyna960 'CRASH'
solver command "lsdyna.single"
solver input file "car6_crash.k"
solver order linear
solver experiment design dopt
solver basis experiment 3toK
solver concurrent jobs 1
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
$ SOLVER "NVH"
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
$
$ DEFINITION OF SOLVER "NVH"
$
solver dyna960 'NVH'
solver command "lsdyna.double"
solver input file "car6_NVH.k"
solver order linear
solver experiment design dopt
solver basis experiment 5toK
solver concurrent jobs 1
$
$ RESPONSES FOR SOLVER "NVH"
$
response 'Frequency' 1 0 "DynaFreq 15 FREQ"
response 'Mode' 1 0 "DynaFreq 15 NUMBER"
response 'Generalized_Mass' 1 0 "DynaFreq 15 GENMASS"
composites 4
$
$ COMPOSITE RESPONSES
$
composite 'Intrusion' type weighted
composite 'Intrusion' response 'Intru_2' -1 scale 1
composite 'Intrusion' response 'Intru_1' 1 scale 1
composite 'HIC_scaled' type targeted
composite 'HIC_scaled' response 'HIC' 0 scale 900
weight 1
composite 'Freq_scaled' type targeted
composite 'Freq_scaled' response 'Frequency' 0 scale 3
weight 1
$
$ COMPOSITE EXPRESSIONS
$
composite 'Intrusion_scaled' {Intrusion/500}
$
$ OBJECTIVE FUNCTIONS
$
objectives 1
objective 'Mass' 1
$
$ CONSTRAINT DEFINITIONS
$
constraints 3
constraint 'HIC_scaled'
upper bound constraint 'HIC_scaled' 1
constraint 'Freq_scaled'
lower bound constraint 'Freq_scaled' 1
constraint 'Intrusion_scaled'
upper bound constraint 'Intrusion_scaled' 1
$
$ JOB INFO
$
iterate param design 0.01
iterate param objective 0.01
iterate param stoppingtype and
iterate 10
STOP
The small car crash design problem was also optimized using the Sequential Random Search procedure (available in Version 2.1) described in Section 2.13. Because of the multidisciplinary nature of the problem, all the variables were selected to be fully shared. The design points for the NVH discipline were forced to be coincident with the points of the crash discipline (see Section 9.6). The number of simulations per iteration is 16. The input file is therefore as follows:
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
$ SOLVER "NVH"
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
$
$ DEFINITION OF SOLVER "NVH"
$
solver dyna960 'NVH'
solver command "ls970.double"
solver input file "car6_NVH.k"
solver concurrent jobs 1
solver experiment duplicate 'CRASH'
$
$ RESPONSES FOR SOLVER "NVH"
$
response 'Frequency' 1 0 "DynaFreq 15 FREQ"
response 'Mode' 1 0 "DynaFreq 15 NUMBER"
response 'Generalized_Mass' 1 0 "DynaFreq 15 GENMASS"
composites 4
$
$ COMPOSITE RESPONSES
$
composite 'Intrusion' type weighted
composite 'Intrusion' response 'Intru_2' -1 scale 1
composite 'Intrusion' response 'Intru_1' 1 scale 1
composite 'HIC_scaled' type targeted
composite 'HIC_scaled' response 'HIC' 0 scale 900
weight 1
composite 'Freq_scaled' type targeted
composite 'Freq_scaled' response 'Frequency' 0 scale 3
weight 1
$
$ COMPOSITE EXPRESSIONS
$
composite 'Intrusion_scaled' {Intrusion/500}
$
$ OBJECTIVE FUNCTIONS
$
objectives 1
objective 'Mass' 1
$
$ CONSTRAINT DEFINITIONS
The results are presented in Figure 17-34 (a) to (f). The random search method converges after
approximately 105 crash + 105 NVH simulations (7 iterations) while the response surface approach requires
about 60 crash + 30 NVH simulations (6 iterations). These numbers might vary because of the random
nature of the methods involved. Note the effects of mode tracking in Figure 17-34 (e).
Figure 17-34: Optimization histories (Small car MDO) – Latin Hypercube Sampling (SRS)
• LS-DYNA is used for both explicit full frontal crash and implicit NVH simulations.
• Multidisciplinary design optimization (MDO) is illustrated with a realistic full vehicle example.
• Extraction is performed using standard LS-DYNA interfaces.
This example illustrates a realistic application of Multidisciplinary Design Optimization (MDO) and
concerns the coupling of the crash performance of a large vehicle with one of its Noise Vibration and
Harshness (NVH) criteria, namely the torsional mode frequency [14]. The MDO formulation used is
depicted in Figure 17-35.
Figure 17-35: MDO formulation: the system-level optimizer (goal: minimize mass subject to crashworthiness and NVH constraints) passes the design variables to the multidisciplinary crashworthiness and NVH analyses, which return the state variables
17.7.1 Modeling
The crashworthiness simulation considers a model containing approximately 30 000 elements of a National Highway Traffic Safety Administration (NHTSA) vehicle [44] undergoing a full frontal impact. A
modal analysis is performed on a so-called ‘body-in-white’ model containing approximately 18 000
elements. The crash model for the full vehicle is shown in Figure 17-36 for the undeformed and deformed
(time = 78ms) states, and with only the structural components affected by the design variables, both in the
undeformed and deformed (time = 72ms) states, in Figure 17-37. The NVH model is depicted in Figure
17-38 in the first torsion vibrational mode. Only body parts that are crucial to the vibrational mode shapes
are retained in this model. The design variables are all thicknesses or gages of structural components in the
engine compartment of the vehicle (Figure 17-37), parameterized directly in the LS-DYNA input file.
Twelve parts are affected, comprising aprons, rails, shotguns, cradle rails and the cradle cross member
(Figure 17-37). LS-DYNA v.960 is used for both the crash and NVH simulations, in explicit and implicit
modes respectively.
Figure 17-36: Crash model of vehicle showing road and wall: (a) undeformed; (b) deformed (78 ms)

Figure 17-37: Structural components affected by the design variables, including the inner and outer rails and the front cradle upper and lower cross members: (a) undeformed; (b) deformed (72 ms)
To illustrate the effect of coupling between the disciplines, both full and partial sharing of the design
variables are considered. In addition, different starting designs are considered in a limited investigation of
the global optimality of the design.
The optimization problem for the different starting designs considered is defined as follows:
Minimize Mass
subject to
Maximum intrusion(x_crash) > 551.8 mm (fully-shared variables)
Maximum intrusion(x_crash) = 551.8 mm (partially-shared variables)

Fully-shared variables:
x_crash = x_NVH = [rail_inner, rail_outer, cradle_rails, aprons, shotgun_inner, shotgun_outer, cradle_crossmember]^T.
The different variable sets above were obtained by using ANOVA variable screening (Section 13.4). The
Mass objective in each case incorporates all the components defined in Figure 17-37. The allowable
torsional mode frequency band is reduced to 1Hz for the partially-shared cases to provide an optimum
design that is more similar to the baseline.
The three stage pulses are calculated from the SAE filtered (60Hz) acceleration and displacement of a left rear sill node in the following fashion (Equation 4):

$$\text{Stage } i \text{ pulse} = \frac{k}{g\,(d_2 - d_1)} \int_{d_1}^{d_2} (-a)\, \mathrm{d}x, \qquad k = \begin{cases} 2, & i = 1 \\ 1, & i = 2, 3 \end{cases}$$

with the limits $(d_1; d_2)$ = (0; 184), (184; 334) and (334; Max(displacement)) for i = 1, 2, 3 respectively, all displacement units in mm; the minus sign converts acceleration to deceleration, and division by g = 9810 mm/s² expresses the result in g. The Stage 1 pulse is represented by a triangle, the peak value being the value used (hence the factor k = 2).
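The response expressions in the input file below implement this calculation; an illustrative Python mirror (assuming acceleration in mm/s² and displacement in mm sampled on a common time grid; this is not LS-OPT code):

# Illustrative mirror of the stage-pulse response expressions below.
# a and x are sampled acceleration [mm/s^2] and displacement [mm]
# histories; -9810 mm/s^2 corresponds to -1 g.
def integral_a_dx(a, x, x1, x2):
    # trapezoidal integral of a with respect to x over [x1, x2]
    # (a crude stand-in for LS-OPT's Integral() expression)
    s = 0.0
    for i in range(1, len(x)):
        if x1 <= x[i - 1] and x[i] <= x2:
            s += 0.5 * (a[i] + a[i - 1]) * (x[i] - x[i - 1])
    return s

def stage_pulses(a, x):
    d_max = max(x)
    i1 = integral_a_dx(a, x, 0.0, 184.0)
    i2 = integral_a_dx(a, x, 184.0, 334.0)
    i3 = integral_a_dx(a, x, 334.0, d_max)
    return ((i1 / -9810.0) * 2.0 / 184.0,        # Stage 1 (triangle: factor 2)
            (i2 / -9810.0) / (334.0 - 184.0),    # Stage 2
            (i3 / -9810.0) / (d_max - 334.0))    # Stage 3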
The constraints are scaled using the target values to balance the violations of the different constraints. This
scaling is only important in cases where multiple constraints are violated as in the current problem.
The LS-OPT input file is given below. Note how the two disciplines (crash and NVH) are treated separately.
Variables are flagged as local with the Local variable_name statement, and then linked to a solver
using the Solver variable variable_name command.
$
response 'Vehicle_Mass_crash' 2204.62 0 "DynaMass 29 30 32 33 34 35 79 81 82 83 MASS"
response 'Vehicle_Mass_crash' linear
response 'Disp' 1 0 "DynaASCII Nodout X_DISP 26730 MAX"
response 'Disp' linear
response 'time_to_184' expression {Lookup("XDISP(t)",184)}
response 'time_to_334' expression {Lookup("XDISP(t)",334)}
response 'time_to_max' expression {LookupMax("XDISP(t)")}
response 'Integral_0_184' expression {Integral("XACCEL(t)",0,time_to_184,"XDISP(t)")}
response 'Integral_184_334' expression
{Integral("XACCEL(t)",time_to_184,time_to_334,"XDISP(t)")}
response 'Integral_334_max' expression
{Integral("XACCEL(t)",time_to_334,time_to_max,"XDISP(t)")}
response 'Stage1Pulse' expression {(Integral_0_184/(-9810))*2/184}
response 'Stage2Pulse' expression {(Integral_184_334/(-9810))/(334-184)}
response 'Stage3Pulse' expression {(Integral_334_max/(-9810))/(Disp-334)}
$
$ HISTORIES AND RESPONSES DEFINED BY EXPRESSIONS FOR SOLVER "CRASH"
$
composite 'Disp_scaled' type targeted
composite 'Disp_scaled' response 'Disp' 0 scale 551.81
weight 1
composite 'Stage1Pulse_scaled' {Stage1Pulse/14.34}
composite 'Stage2Pulse_scaled' {Stage2Pulse/17.57}
composite 'Stage3Pulse_scaled' {Stage3Pulse/20.76}
$
$ SOLVER SPECIFIC JOB INFO FOR SOLVER "CRASH"
$
solver concurrent jobs 4
$
$ DEFINITION OF SOLVER "NVH"
$
solver dyna 'NVH'
$
$ VARIABLES FOR SOLVER "NVH"
$
$
$ EXPERIMENTAL DESIGN OF SOLVER "NVH"
$
Solver Order linear
Solver Experimental design dopt
Solver Basis experiment 3toK
Solver Number experiment 8
$
$ SOLVER AND PREPROCESSOR COMMANDS OF SOLVER "NVH"
$
solver command "lsdyna.double"
solver input file "dyna_biw.input"
$
$ RESPONSES FOR SOLVER "NVH"
$
response 'Frequency' 1 0 "DynaFreq 1 FREQ"
response 'Frequency' linear
$
$ NO HISTORIES DEFINED FOR SOLVER "NVH"
$
$
The deceleration versus displacement curves of the baseline crash model and Iteration 6 design are shown in
Figure 17-39 for the partially-shared variable case. The stage pulses as calculated by Equation 4 are also
shown, with the optimum values only differing slightly from the baseline. The reduction in displacement at
the end of the curve shows that there is spring-back or rebound at the end of the simulation.
The bounds on the design variables are given in Table 17-4 together with the different initial designs or
starting locations used. Starting design 1 corresponds to the baseline model as shown in Figure 17-36
through Figure 17-38, while the other two designs correspond to the opposite corners of the design space
hypercube, i.e. the lightest design and heaviest design possible with the design variables used.
Figure 17-39: Deceleration (Filtered: SAE 60Hz) versus displacement of baseline and Iteration 6 design – (Partially-shared variables): Starting design 1
Table 17-4: Bounds on design variables and starting designs for optimization

                                   Rail_  Rail_  Cradle  Aprons  Shotgun  Shotgun  Cradle cross
                                   inner  outer  rail    [mm]    inner    outer    member
                                   [mm]   [mm]   [mm]            [mm]     [mm]     [mm]
Lower bound                        1      1      1       1       1        1        1
Upper bound                        3      3      2.5     2.5     3        3        2.5
Starting design 1 (Baseline)       2      1.5    1.93    1.3     1.3      1.3      1.93
Starting design 2 (Minimum weight) 1      1      1       1       1        1        1
Starting design 3 (Maximum weight) 3      3      2.5     2.5     3        3        2.5
The optimum results for the extreme starting designs (compared further in Table 17-5) are:

                     Iter.  Mass   Intrusion  Stage 1  Stage 2  Stage 3  Frequency
                            [kg]   [mm]       [g]      [g]      [g]      [Hz]
Starting design 2      8    43.2   552.5      14.66    17.56    20.69    38.15
(min)
Starting design 3      6    43.8   553.7      14.46    17.48    20.61    39.07
(max)
Beginning with starting design 1, the optimization histories of the objective and constraints are shown for the fully- and partially-shared variable cases in Figure 17-40 through Figure 17-43. Most of the reduction in mass occurs in the first iteration (Figure 17-40), although this results in a significant violation of the maximum displacement and second stage pulse constraints, especially in the fully-shared variable case. The second iteration corrects this, and from here the optimizer tries to reconcile four constraints that are marginally active. Most of the intermediate constraint violations (see e.g. Figure 17-41) can be ascribed to the difference between the value predicted by the response surface and the value computed by the simulation. The torsional frequency remains within the bounds set during the optimization for the fully-shared case.
Figure 17-40: Optimization history of the mass objective [kg] for the fully- and partially-shared variable cases

Figure 17-41: Optimization history of the intrusion constraint [mm] for the fully- and partially-shared variable cases

Figure 17-42: Optimization history of the stage pulses [g] and their lower bounds for the fully- and partially-shared variable cases

Figure 17-43: Optimization history of the torsional mode frequency [Hz] for the fully- and partially-shared variable cases
The results of the partially-shared variable case for starting design 1 (Figure 17-42) can be seen to be
superior to the fully-shared case. The reason for this is that all the disciplinary responses are now sensitive
to their respective variables, allowing faster convergence. Interestingly, most of the mass reduction in this
case occurs in the cradle cross member, a variable that is only included in the NVH simulation. The
variation of the remaining variables is however enough to meet all the crash constraints. The reduction in
the allowable frequency band made the NVH performance more interesting in Phase 1 than Phase 2. It can
be seen in Figure 17-43 that the lower bound becomes active during the optimization process, but that the
optimizer then pulls the torsional mode frequency back within the prescribed range. The final design iteration considered (iteration 9) was repeated (see point 10 in Figure 17-41 through Figure 17-43) with the variables rounded to the nearest 0.1 mm, this being the manufacturing tolerance typically used in the stamping of automotive parts. The resulting design is 4.75% lighter than the baseline, but at the cost of a 2.4% violation of the Stage 2 pulse constraint. The other constraints are satisfied.
A summary result for the heaviest and lightest starting designs (2 and 3) is given in Figure 17-44 for the
objective function. In both cases, an ANOVA was performed after one iteration of full sharing only, in order
to reduce the number of discipline-specific variables using variable screening. The optimization was then
restarted using the variable sets as defined above. As expected, both designs converge to an intermediate
mass in an attempt to satisfy all the constraints. The heaviest design history exhibits the largest mass change
because of the significant increase in the thickness of the components over the baseline design. The initial
allowable range or move limit on the design variables was doubled in the heaviest design case, to investigate
the effect of the initial subregion size on the convergence rate. It can be seen that this resulted in the
objective being minimized in relatively few iterations.
Figure 17-44: Optimization history of component mass (Objective) – Starting designs 2 and 3
The optimum design variables (cf. Table 17-6; same order as Table 17-4) are:

                         Rail_  Rail_  Cradle  Aprons  Shotgun  Shotgun  Cradle cross
                         inner  outer  rail            inner    outer    member
Starting design 2         2.04   1.884  1.507   1.441   1.11     1.372    1.161
(lightest)
Starting design 3         1.95   1.765  1.469   1.303   2.123    1.391    1.208
(heaviest)
The optimum designs obtained in each case above are compared in Table 17-5 for the objective function and
constraints, and in Table 17-6 for the design variables. Note how the partially shared variable Starting
design 1 case has the lowest mass while performing the best as far as the constraints are concerned. The
extreme starting designs gave interesting results. After rapidly improving from the initial violations, they
both converged to local minima. The maximum design (Starting design 3) started the furthest away from the
optimum design, but converged rapidly due to the increased initial move limit. This highlights the need for a
Comparing the fully- and partially-shared variable cases for starting design 1, it can be seen that the optimization process converged in 9 iterations in the first case, while in the latter a good compromise design was found in only 6 iterations. Together with the reduction in the number of design variables, especially for the NVH simulations, this leads to the reduction in the number of simulations shown in Table 17-7. To explain the
number of simulations, clarification of the experimental design used is in order. A 50% over-sampled
D-optimal experimental design is used, whereby the number of experimental points for a linear
approximation is determined from the formula: 1.5(n + 1) + 1, where n refers to the number of design
variables. Consequently, for the full sharing, 7 variables imply 13 experimental design points, while for the
partial sharing, 6 variables for crash imply 11 design points, and 4 variables for NVH imply 8 points (see
Chapter 8). The NVH simulations, although not time-consuming due to their implicit formulation, involve a
large use of memory due to double-precision matrix operations. Crashworthiness simulations, on the other
hand, require little memory because of single-precision vector operations, but are time-consuming due to
their explicit nature. It is therefore preferable to assign as many processors as possible to the
crashworthiness simulations, while limiting the number of simultaneous NVH simulations to the available
computer memory to prevent swapping.
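These counts follow from the formula just quoted; a quick check (illustrative Python, not part of LS-OPT) of the totals in Table 17-7 below:

# Points per iteration from the 1.5*(n+1)+1 rule, and the totals of
# Table 17-7 (9 iterations fully shared; 6 iterations partially shared).
def dopt_points(n):
    return int(1.5 * (n + 1)) + 1

print(9 * dopt_points(7))   # fully shared:            9 x 13 = 117
print(6 * dopt_points(6))   # partially shared, crash: 6 x 11 = 66
print(6 * dopt_points(4))   # partially shared, NVH:   6 x 8  = 48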
Table 17-7: Number of simulations for the fully- and partially-shared variable cases (Starting design 1)

Case                          Crash simulations     NVH simulations
                              for 'convergence'     for 'convergence'
Fully-shared variables        9 x 13 = 117          9 x 13 = 117
Partially-shared variables    6 x 11 = 66           6 x 8 = 48
• Variable screening is illustrated for a knee impact minimization study for a problem with both thickness
and shape variables.
• The use of ANOVA for variable screening is shown.
• LS-DYNA is used for the explicit impact simulation.
• An independent parametric preprocessor is used.
• Extraction is performed using standard LS-DYNA interfaces.
• The minimum of two maxima is obtained in the objective (multi-criteria or multi-objective problem).
Figure 17-45 shows the finite element model of a typical automotive instrument panel (IP) [2]. For model
simplification and reduced per-iteration computational times, only the driver's side of the IP is used in the
analysis, and consists of around 25 000 shell elements. Symmetry boundary conditions are assumed at the
centerline, and to simulate a bench component "Bendix" test, body attachments are assumed fixed in all 6
directions. Also shown in Figure 17-45 are simplified knee forms which move in a direction as determined
from prior physical tests. As shown in the figure, this system is composed of a knee bolster (steel, plastic or
both) that also serves as a steering column cover with a styled surface, and two energy absorption (EA)
brackets (usually steel) attached to the cross vehicle IP structure. The brackets absorb a significant portion
of the lower torso energy of the occupant by deforming appropriately. Sometimes, a steering column
isolator (also known as a yoke) may be used as part of the knee bolster system to delay the wrap-around of
the knees around the steering column. The last three components are non-visible and hence their shape can
be optimized. The 11 design variables are shown in Figure 17-46. The three gauges and the yoke cross-
sectional radius are also considered in a separate sizing (4 variable) optimization.
Figure 17-45: Typical instrument panel prepared for a "Bendix" component test (showing the simplified knee forms and the styled, non-optimizable surface)

Figure 17-46: Typical major components of a knee bolster system and definition of design variables (e.g. right EA width, right bracket gauge, left EA inner flange width)
The simulation is carried out for a 40 ms duration by which time the knees have been brought to rest. It
may be mentioned here that the Bendix component test is used mainly for knee bolster system development;
for certification purposes, a different physical test representative of the full vehicle is performed. Since the
simulation used herein is at a subsystem level, the results reported here may be used mainly for illustration
purposes.
Minimization over both knee forces is achieved by constraining them to impossibly low values. The
optimization algorithm will therefore always try to minimize the maximum knee force. The knee forces
have been filtered, SAE 60 Hz, to improve the approximation accuracy.
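Conceptually (illustrative Python, not LS-OPT source): with an unattainable upper bound on both scaled knee forces, the maximum constraint violation that the internal auxiliary problem minimizes reduces to the maximum of the two forces.

# Sketch: with an impossibly low bound on both scaled knee forces,
# minimizing the maximum violation is equivalent to minimizing the
# maximum knee force.
def max_violation(left_force, right_force, bound=0.5):
    return max(left_force - bound, right_force - bound, 0.0)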
17.8.3 Implementation
Truegrid is used to parameterize the geometry. The section of the Truegrid input file (s7.tg) where the design variables are substituted is shown below.
para
w1 <<L_Flange_Width>> c Left EA flange width
w2 <<R_Flange_Width>> c Right EA flange width
thick1 <<L_Bracket_Gauge>> c Left bracket gauge
thick2 <<R_Bracket_Gauge>> c Right bracket gauge
thick3 <<Bolster_gauge>> c Knee bolster gauge
f1 <<T_Flange_Depth>> c Left EA Depth Top
f2 <<F_Flange_Depth>> c Left EA Depth Front
f3 <<B_Flange_Depth>> c Left EA Depth Bottom
f4 <<I_Flange_Width>> c Left EA Inner Flange Width
r1 <<Yolk_Radius>> c Yolk bar radius
r2 <<R_Bracket_Radius>> c Oblong hole radius
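When LS-OPT prepares a run, each <<name>> marker is replaced by the current value of the corresponding design variable. Conceptually (illustrative Python; not the actual preprocessor logic):

import re

# Conceptual sketch of the <<name>> substitution performed on a
# parameterized preprocessor input file such as s7.tg.
def substitute(template_text, values):
    # values: e.g. {'L_Flange_Width': 20.0, 'Yolk_Radius': 4.0, ...}
    return re.sub(r'<<(\w+)>>',
                  lambda m: repr(values[m.group(1)]),
                  template_text)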
The LS-OPT input file is shown below for the 11-variable shape optimization case:
$
$ DESIGN FUNCTIONS FOR SOLVER "1"
$
response 'L_Knee_Force' 0.000153846 0 "DynaASCII rcforc R_FORCE 1 MAX SAE 60.0"
response 'R_Knee_Force' 0.000153846 0 "DynaASCII rcforc R_FORCE 2 MAX SAE 60.0"
response 'L_Knee_Disp' 0.00869565 0 "DynaASCII Nodout R_DISP 24897 MAX"
response 'R_Knee_Disp' 0.00869565 0 "DynaASCII Nodout R_DISP 25337 MAX"
response 'Yoke_Disp' 0.0117647 0 "DynaASCII Nodout R_DISP 28816 MAX"
response 'Kinetic_Energy' 6.49351e-06 0 "DynaASCII glstat K_ENER 0 TIMESTEP"
response 'Mass' 638.162 0 "DynaMass 7 8 48 62 MASS"
$
$ (DUMMY) OBJECTIVE FUNCTION
$
objectives 1
objective 'Mass' response 'Mass' 1
$
$ CONSTRAINT DEFINITIONS
$
constraints 6
constraint 'L_Knee_Force' response 'L_Knee_Force'
upper bound constraint 'L_Knee_Force' 0.5
constraint 'R_Knee_Force' response 'R_Knee_Force'
upper bound constraint 'R_Knee_Force' 0.5
constraint 'L_Knee_Disp' response 'L_Knee_Disp'
strict
upper bound constraint 'L_Knee_Disp' 1
constraint 'R_Knee_Disp' response 'R_Knee_Disp'
upper bound constraint 'R_Knee_Disp' 1
constraint 'Yoke_Disp' response 'Yoke_Disp'
upper bound constraint 'Yoke_Disp' 1
constraint 'Kinetic_Energy' response 'Kinetic_Energy'
upper bound constraint 'Kinetic_Energy' 1
$
$ EXPERIMENTAL DESIGN
$
Order linear
Experimental design dopt
Basis experiment 3toK
Number experiment 19
$
$ JOB INFO
$
concurrent jobs 5
iterate param design 0.01
iterate param objective 0.01
iterate 5
STOP
Summary of significance of variables
------------------------------------------------------------------------
                | L_Knee_Force | R_Knee_Force | L_Knee_Disp  | R_Knee_Disp  |
----------------|--------------|--------------|--------------|--------------|
L_Bracket_Gauge |==========    |              |======        |==========    |
T_Flange_Depth  |              |              |==            |              |
F_Flange_Depth  |              |              |===           |              |
B_Flange_Depth  |              |              |====          |              |
I_Flange_Width  |=             |              |======        |=             |
L_Flange_Width  |===           |              |=             |=             |
R_Bracket_Gauge |              |==========    |=========     |              |
R_Flange_Width  |=             |              |=====         |              |
R_Bracket_Radius|              |              |=             |              |
Bolster_gauge   |=======       |              |==========    |=========     |
Yolk_Radius     |==            |====          |=====         |===           |
----------------|--------------|--------------|--------------|--------------|
More detailed results based on the 90 and 95% confidence intervals show e.g. that the left knee force is
mostly influenced by the left bracket gauge, bolster gauge and left flange width, as would be expected. If the
spread in the data (as denoted by the upper and lower limits of the respective confidence intervals) causes
the sensitivity (coefficient value) to change sign (as e.g. in the case of T_Flange_Depth in the table
below), then that variable’s contribution to the respective response is deemed insignificant.
This result is based on one iteration only. Reducing the number of variables from 11 to 7 reduces the
number of LS-DYNA simulations from 19 to 13 when using the default D-optimal design settings.
The optimization history of the knee forces using the original 11 variables and the reduced set (7 variables)
is shown in Figure 17-47.
Figure 17-47: Comparison of optimization history of maximum knee force (computed and predicted) for the full (11-variable) and reduced (7-variable) variable sets
The stiffness data for the floor is specified using a user-defined distribution as follows:
2.0e3 0.0
4.0e3 0.3
6.0e3 0.4
8.0e3 0.3
10.0e3 0.0
The stiffness data for the roof is specified using a user-defined distribution as follows:
The probability of the energy absorbed being less than 2200 kJ is computed. The best-known answer, 0.048, was found using 1500 Monte Carlo runs. The problem is analyzed using a Monte Carlo evaluation of 150 runs and a quadratic response surface built using 35 runs, giving probabilities of 0.06 and 0.053 respectively, with the response surface result being the more accurate.
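A sketch of how samples can be drawn from such a piecewise-linear user-defined PDF (illustrative Python, using the floor data above; this is not LS-OPT source code):

import random, bisect

# Inverse-CDF sampling from a piecewise-linear PDF given as
# (value, density) pairs, here the floor stiffness data above.
pts = [(2.0e3, 0.0), (4.0e3, 0.3), (6.0e3, 0.4), (8.0e3, 0.3), (10.0e3, 0.0)]

# cumulative areas by the trapezoidal rule, normalized to 1
cdf = [0.0]
for (x0, p0), (x1, p1) in zip(pts, pts[1:]):
    cdf.append(cdf[-1] + 0.5 * (p0 + p1) * (x1 - x0))
cdf = [c / cdf[-1] for c in cdf]

def sample():
    u = random.random()
    i = bisect.bisect_right(cdf, u) - 1
    frac = (u - cdf[i]) / (cdf[i + 1] - cdf[i])
    # linear interpolation of the inverse CDF within segment i
    # (adequate for a sketch; exact inversion of a trapezoid is quadratic)
    return pts[i][0] + frac * (pts[i + 1][0] - pts[i][0])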
$ -------------------------------------------------------------
"Bus Frame Roll-over Example. Demonstrate Monte Carlo Analysis"
Author "LS-OPT Manual"
$ -------------------------------------------------------------
$
$
$ ---------------------- DISTRIBUTION --------------------------------
$
distributions 3
distribution 'Intru_Dist' NORMAL 0.0 0.010
distribution 'RoofHinge_Dist' USER_DEFINED_PDF "RoofHinge.dat"
distribution 'FloorHinge_Dist' USER_DEFINED_PDF "FloorHinge.dat"
$
$ ------------------------- VARIABLES --------------------------------
$
variables 5
$
$ NOISE VARS
$
Noise Variable 'FloorDriverHinge' Distribution 'FloorHinge_Dist'
Noise Variable 'FloorDoorHinge' Distribution 'FloorHinge_Dist'
Noise Variable 'RoofDriverHinge' Distribution 'RoofHinge_Dist'
Noise Variable 'RoofDoorHinge' Distribution 'RoofHinge_Dist'
$
$ CONTROL VARS
$
Variable 'Intru' 0.5
Variable 'Intru' Distribution 'Intru_Dist'
Lower Bound Variable 'Intru' 0.1
Upper Bound Variable 'Intru' 0.8
$
$ Dependents
$
dependents 4
dependent 'RoofDriverHingeMin' {-RoofDriverHinge}
dependent 'RoofDoorHingeMin' {-RoofDoorHinge}
dependent 'FloorDriverHingeMin' {-FloorDriverHinge}
dependent 'FloorDoorHingeMin' {-FloorDoorHinge}
###############################################################
Direct Monte Carlo simulation considering 5 stochastic variables.
###############################################################
#####################################################
STATISTICS OF VARIABLES
#####################################################
Variable 'FloorDriverHinge'
Distribution Information
------------------------
Number of points : 150
Mean Value : 6011
Standard Deviation : 584.6
Coef of Variation : 0.09726
Maximum Value : 7048
Minimum Value : 4958
Variable 'FloorDoorHinge'
Distribution Information
------------------------
Number of points : 150
Mean Value : 6115
Variable 'RoofDriverHinge'
Distribution Information
------------------------
Number of points : 150
Mean Value : 1182
Standard Deviation : 166.5
Coef of Variation : 0.1409
Maximum Value : 1491
Minimum Value : 905.5
Variable 'RoofDoorHinge'
Distribution Information
------------------------
Number of points : 150
Mean Value : 1187
Standard Deviation : 167.6
Coef of Variation : 0.1411
Maximum Value : 1500
Minimum Value : 910
Variable 'Intru'
Distribution Information
------------------------
Number of points : 150
Mean Value : 0.4994
Standard Deviation : 0.009914
Coef of Variation : 0.01985
Maximum Value : 0.5355
Minimum Value : 0.473
#####################################################
STATISTICS OF RESPONSES
#####################################################
Response 'Ener'
Distribution Information
------------------------
Number of points : 150
Mean Value : 2327
Standard Deviation : 80.09
Coef of Variation : 0.03442
Maximum Value : 2582
Minimum Value : 2136
#####################################################
STATISTICS OF COMPOSITES
#####################################################
#####################################################
STATISTICS OF CONSTRAINTS
#####################################################
Constraint 'Ener'
Distribution Information
------------------------
Number of points : 150
Mean Value : 2327
Standard Deviation : 80.09
Coef of Variation : 0.03442
Maximum Value : 2582
Minimum Value : 2136
Lower Bound:
------------
Bound .................................... 2200
Evaluations exceeding this bound ......... 9
Probability of exceeding bound ........... 0.06
Confidence Interval on Probability.
Standard Deviation of Prediction Error: 0.01939
Lower Bound | Reliability Value | Higher Bound
0.02122 | 0.06 | 0.09878
Confidence Interval of 95% assuming Normal Distribution
Confidence Interval of 75% using Tchebysheff's Theorem
ANALYSIS COMPLETED
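The reported interval appears to be p ± 2σ with σ = sqrt(p(1−p)/N), which corresponds to roughly 95% confidence for a normal distribution and 75% by Tchebysheff's theorem. A short check (illustrative Python):

# Reproducing the reported confidence bounds for p = 9/150.
N, exceed = 150, 9
p = exceed / N                        # 0.06
sigma = (p * (1 - p) / N) ** 0.5      # 0.01939
print(p - 2 * sigma, p + 2 * sigma)   # 0.02122, 0.09878

The same expressions reproduce the bounds of the metamodel-based analysis below (p = 53014/1000000, sigma = 0.0002241).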
The following lines are added to specify the subregion for the computation of the approximations; the experimental design and job sections of the previous command file are changed to:
$
$ -------------------------- SAMPLING AND JOB -----------------------
$
Experimental design dopt
Basis experiment 3toK
Number experiment 35
analyze metamodel monte carlo
STOP
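Conceptually, the metamodel-based analysis fits a quadratic response surface to the 35 simulation results and then runs the large Monte Carlo sample on the inexpensive surface instead of on the simulator. An illustrative Python sketch (array names are placeholders, not LS-OPT code):

import numpy as np

# Sketch of metamodel-based Monte Carlo: fit a quadratic response
# surface to a small simulation sample, then evaluate a large Monte
# Carlo sample on the surface. X (n_sim x n_var) and y (n_sim,) hold
# the simulated points and responses; X_mc holds the Monte Carlo draws.
def quad_basis(X):
    n = X.shape[1]
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(n)]
    cols += [X[:, i] * X[:, j] for i in range(n) for j in range(i, n)]
    return np.column_stack(cols)

def probability_of_violation(X, y, X_mc, bound=2200.0):
    beta, *_ = np.linalg.lstsq(quad_basis(X), y, rcond=None)
    y_mc = quad_basis(X_mc) @ beta      # cheap surrogate evaluations
    return np.mean(y_mc < bound)        # fraction violating the lower bound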
###############################################################
Monte Carlo simulation considering 5 stochastic variables.
Computed using 1000000 simulations
###############################################################
--------------------------------------------------------------
Results for reliability analysis using approximate functions
--------------------------------------------------------------
#####################################################
STATISTICS OF VARIABLES
#####################################################
Variable 'FloorDriverHinge'
Distribution Information
------------------------
Number of points : 1000000
Mean Value : 6000
Standard Deviation : 578.7
Coef of Variation : 0.09645
Maximum Value : 7050
Minimum Value : 4950
Variable 'FloorDoorHinge'
Distribution Information
------------------------
Number of points : 1000000
Mean Value : 6000
Standard Deviation : 578.9
Coef of Variation : 0.09649
Maximum Value : 7050
Minimum Value : 4950
Variable 'RoofDriverHinge'
Distribution Information
------------------------
Number of points : 1000000
Mean Value : 1200
Standard Deviation : 165.3
Coef of Variation : 0.1377
Maximum Value : 1500
Minimum Value : 900
Variable 'RoofDoorHinge'
Distribution Information
------------------------
Number of points : 1000000
Mean Value : 1200
Standard Deviation : 165.3
Coef of Variation : 0.1377
Maximum Value : 1500
Minimum Value : 900
Variable 'Intru'
Distribution Information
#####################################################
STATISTICS OF RESPONSES
#####################################################
Response 'Ener'
Distribution Information
------------------------
Number of points : 1000000
Mean Value : 2328
Standard Deviation : 79.14
Coef of Variation : 0.034
Maximum Value : 2674
Minimum Value : 1971
#####################################################
STATISTICS OF COMPOSITES
#####################################################
#####################################################
STATISTICS OF CONSTRAINTS
#####################################################
Constraint 'Ener'
Distribution Information
------------------------
Number of points : 1000000
Mean Value : 2328
Standard Deviation : 79.14
Coef of Variation : 0.034
Maximum Value : 2674
Minimum Value : 1971
Lower Bound:
------------
Bound .................................... 2200
Evaluations exceeding this bound ......... 53014
Probability of exceeding bound ........... 0.05301
Confidence Interval on Probability.
Standard Deviation of Prediction Error: 0.0002241
Lower Bound | Reliability Value | Higher Bound
0.05257 | 0.05301 | 0.05346
Confidence Interval of 95% assuming Normal Distribution
ANALYSIS COMPLETED
[1] Akaike, H. Statistical predictor identification. Ann.Inst.Statist.Math., 22, pp. 203-217, 1970.
[2] Akkerman, A., Thyagarajan, R., Stander, N., Burger, M., Kuhn, R., Rajic, H. Shape optimization for
crashworthiness design using response surfaces. Proceedings of the 1st International Workshop on
Multidisciplinary Design Optimization, Pretoria, South Africa, 8-10 August 2000, pp. 270-279.
[3] Arora, J.S. Introduction to Optimum Design. 1st ed. McGraw-Hill, 1989.
[4] Arora, J.S. Sequential linearization and quadratic programming techniques. In Structural
Optimization: Status and Promise, Ed. Kamat, M.P., AIAA, 1993.
[5] Bakker, T.M.D. Design Optimization with Kriging Models. WBBM Report Series 47, Ph.D. thesis,
Delft University Press, 2000.
[6] Barthelemy, J.-F. M. Function Approximation. In Structural Optimization: Status and Promise, Ed.
Kamat, M.P., 1993.
[7] Basu, A., Frazer, L.N. Rapid determination of the critical temperature in simulated annealing
inversion, Science, 249, pp. 1409-1412, 1990.
[8] Bishop, C.M. Neural Networks for Pattern Recognition. Oxford University Press, 1995.
[9] Bounds, D. G., New optimization methods from physics and biology, Nature, 329, pp. 215-218, 1987.
[10] Box, G.E.P., Draper, N.R. A basis for the selection of a response surface design. Journal of the
American Statistical Association, 54, pp. 622-654, 1959.
[11] Box., G.E.P., Draper, N.R. Empirical Model Building and Response Surfaces, Wiley, New York,
1987.
[12] Burgee, S., Giunta, A. A., Narducci, R., Watson, L. T., Grossman, B. and Haftka, R. T. A coarse
grained parallel variable-complexity multidisciplinary optimization paradigm, The International
Journal of Supercomputer Applications and High Performance Computing, 10(4), pp. 269-299, 1996.
[13] Cohn, D. Neural network exploration using optimal experiment design, Neural Networks, (9)6, pp.
1071-1083, 1996.
[14] Craig K.J., Stander, N., Dooge, D., Varadappa, S. MDO of automotive vehicle for crashworthiness
and NVH using response surface methods. Paper AIAA2002_5607, 9th AIAA/ISSMO Symposium on
Multidisciplinary Analysis and Optimization, 4-6 Sept 2002, Atlanta, GA.
[15] Daberkow, D.D., Mavris, D.N. An investigation of metamodeling techniques for complex systems design. Symposium on Multidisciplinary Analysis and Design, Atlanta, October 2002.
[16] Eschenauer, H., Koski, J., Osyczka, A. Multicriteria Design Optimization. Procedures and
Applications. Springer-Verlag, Berlin, 1990.
[17] Fedorova, N.N., Terekhoff, S.A. Space Filling Designs, Internal Report, April 2002.
[18] Foresee, F. D., Hagan, M. T. Gauss-Newton approximation to Bayesian regularization. Proceedings of
the 1997 International Joint Conference on Neural Networks, pp. 1930-1935, 1997.
[19] Forsberg, J. Simulation Based Crashworthiness Design – Accuracy Aspects of Structural optimization
using Response Surfaces. Thesis No. 954. Division of Solid Mechanics, Department of Mechanical
Engineering, Linköping University, Sweden, 2002.
[20] Giger, M., Redhe, M., Nilsson, L. Division of Mechanics, Department of Mechanical Engineering, Linköping University, Sweden. Personal communication, January 2003.
[61] Stander, N., Craig, K.J. On the robustness of a simple domain reduction scheme for simulation-based
optimization, Engineering Computations, 19(4), pp. 431-450, 2002.
[62] Stander, N., Reichert, R., Frank, T. Optimization of nonlinear dynamic problems using successive linear approximations. AIAA Paper 2000-4798, 2000.
[63] Stander, N., Roux, W.J., Giger, M., Redhe, M., Fedorova, N. and Haarhoff, J. Crashworthiness
optimization in LS-OPT: Case studies in metamodeling and random search techniques. Proceedings of
the 4th European LS-DYNA Conference, Ulm, Germany, May 22-23, 2003. (Also www.lstc.com).
[64] Stander, N., Snyman, J.A., Coster, J.E., On the robustness and efficiency of the SAM algorithm for
structural optimization. International Journal for Numerical Methods in Engineering, 38, pp. 119-135,
1995.
[65] Sunar, M., Belegundu, A.D. Trust region methods for structural optimization using exact second order
sensitivity. International Journal for Numerical Methods in Engineering, 32, pp. 275-293, 1991.
[66] Thanedar, P.B., Arora, J.S., Tseng, C.H., Lim, O.K., Park, G.J. Performance of some SQP algorithms
on structural design problems. International Journal for Numerical Methods in Engineering, 23, pp.
2187-2203, 1986.
[67] Toropov, V.V. Simulation approach to structural optimization. Structural Optimization, 1, pp. 37-46,
1989.
[68] Tu, J. and Choi, K.K. Design potential concept for reliability-based design optimization. Technical
report R99-07. Center for computer aided design and department of mechanical engineering. College
of engineering. University of Iowa. December 1999.
[69] Van Campen, D.H., Nagtegaal R., Schoofs, A.J.G. Approximation methods in structural optimization
using experimental designs for multiple responses, In: Eschenauer, H.; Koski, J.; Osyczka, A. (Eds.)
Multicriteria Design Optimization - Procedures and Applications, Springer-Verlag: Berlin,
Heidelberg, New York, pp. 205-228, 1990.
[70] Vanderplaats, G.N. Numerical Optimization Techniques for Engineering Design: with Applications.
McGraw-Hill, New York, 1984.
[71] Wahba, G. Spline Models for Observational Data. Volume 59 of Regional Conference Series in
Applied Mathematics. SIAM Press, Philadelphia, 1990.
[72] Wall, L., Christiansen, T., Schwartz, R. Programming Perl, O’Reilly & Associates, Inc., Cambridge,
1991.
[73] White, H., Hornik, K., Stinchcombe, M. Universal approximation of an unknown mapping and its
derivatives. Artificial Neural Networks: Approximations and Learning Theory, H. White, ed., Oxford,
UK: Blackwell, 1992.
[74] Wilson, B., Cappelleri, D.J., Frecker, M.I. and Simpson, T.W. Efficient Pareto Frontier Exploration
using surrogate approximations. Optimization and Engineering, 2 (1), pp.31-50, 2001.
[75] Xu, Q-S., Liang, Y-Z., Fang, K-T., The effects of different experimental designs on parameter
estimation in the kinetics of a reversible chemical reaction. Chemometrics and Intelligent Laboratory
Systems, 52, pp. 155-166, 2000.
[76] Yamazaki, K., Han, J., Ishikawa, H., Kuroiwa, Y. Maximization of crushing energy absorption of
cylindrical shells – simulation and experiment, Proceedings of the OPTI-97 Conference, Rome, Italy,
September 1997.
[77] Ye, K., Li, W., Sudjianto, A., Algorithmic construction of optimal symmetric Latin Hypercube
designs, Journal of Statistical Planning and Inferences, 90, pp. 145-159, 2000.
[78] Zang, T.A., Green, L.L., Multidisciplinary Design Optimization techniques: Implications and
opportunities for fluid dynamics research, AIAA Paper 99-3798, 1999.
APPENDIX A: LS-DYNA ASCII RESULT FILES AND COMPONENTS

ABSTAT
Keyword Description
VOLUME Volume
PRESSURE Pressure
I_ENER Internal energy
IN_FLOW_RATE Input mass flow rate
OUT_FLOW_RATE Output mass flow rate
MASS Mass
TEMP Temperature
DENSITY Density
AREA Area
BNDOUT
Keyword Description
X_FORCE X-force
Y_FORCE Y-force
Z_FORCE Z-force
DEFORC
Keyword Description
X_FORCE X-force
Y_FORCE Y-force
Z_FORCE Z-force
R_FORCE Resultant force
ELOUT
Keyword Element Description
BXX_STRESS Brick XX-stress
BYY_STRESS YY-stress
BZZ_STRESS ZZ-stress
BXY_STRESS XY-stress
BYZ_STRESS YZ-stress
BZX_STRESS ZX-stress
YIELD Yield function
BE_STRESS Effective stress
BPRESSURE Pressure
BMAX_SHEAR Maximum shear stress
BMAX_P_STRESS Maximum principal stress
BMIN_P_STRESS Minimum principal stress
AXIAL Beam Axial force resultant
S_SHEAR s-Shear resultant
T_SHEAR t-Shear resultant
S_MOMENT s-Moment resultant
T_MOMENT t-Moment resultant
TORSION Torsional resultant
SIG_11 σ11
SIG_12 σ12
SIG_31 σ31
PLASTIC Plastic strain
ELOUT
Keyword Element Description
XX_STRESS      Shell element  XX-stress
YY_STRESS                     YY-stress
ZZ_STRESS                     ZZ-stress
XY_STRESS                     XY-stress
YZ_STRESS                     YZ-stress
ZX_STRESS                     ZX-stress
P_STRAIN                      Plastic strain
PRESSURE                      Pressure
E_STRESS                      Effective stress
MAX_SHEAR                     Maximum shear stress
MAX_P_STRESS                  Maximum principal stress
MIN_P_STRESS                  Minimum principal stress
XX_STRAIN      Shell element  XX-strain
YY_STRAIN                     YY-strain
ZZ_STRAIN                     ZZ-strain
XY_STRAIN                     XY-strain
YZ_STRAIN                     YZ-strain
ZX_STRAIN                     ZX-strain
E_STRAIN                      Effective strain
MAX_S_STRAIN                  Maximum shear strain
MAX_P_STRAIN                  Maximum principal strain
MIN_P_STRAIN                  Minimum principal strain
TXX_STRESS     Thick shell    XX-stress
TYY_STRESS                    YY-stress
TZZ_STRESS                    ZZ-stress
TXY_STRESS                    XY-stress
TYZ_STRESS                    YZ-stress
TZX_STRESS                    ZX-stress
TP_STRAIN                     Plastic strain
TPRESSURE                     Pressure
TE_STRESS                     Effective stress
TMAX_SHEAR                    Maximum shear stress
TMAX_P_STRESS                 Maximum principal stress
TMIN_P_STRESS                 Minimum principal stress
GCEOUT
Keyword Description
X_FORCE X-force
Y_FORCE Y-force
Z_FORCE Z-force
R_FORCE Force magnitude
X_MOMENT X-moment
Y_MOMENT Y-moment
Z_MOMENT Z-moment
R_MOMENT Moment magnitude
Global Statistics
GLSTAT
Keyword Description
K_ENER Kinetic energy
I_ENER Internal energy
T_ENER Total energy
RATIO Ratio
SW_ENER Stonewall energy
D_ENER Spring & Damper energy
HG_ENER Hourglass energy
SI_ENER Sliding interface energy
EW_ENER External work
X_VEL Global x-velocity
Y_VEL Global y-velocity
Z_VEL Global z-velocity
T_VEL Velocity
JNTFORC
Keyword Description
X_FORCE X-force
Y_FORCE Y-force
Z_FORCE Z-force
X_MOMENT X-moment
Y_MOMENT Y-moment
Z_MOMENT Z-moment
R_FORCE R-force
R_MOMENT R-moment
Material Summary
MATSUM
Keyword Description
K_ENER Kinetic energy
I_ENER Internal energy
X_MOMENTUM X-momentum
Y_MOMENTUM Y-momentum
Z_MOMENTUM Z-momentum
MOMENTUM Momentum
XRB_VEL X-rigid body velocity
YRB_VEL Y-rigid body velocity
ZRB_VEL Z-rigid body velocity
RB_VEL Rigid body velocity
TK_ENER Total kinetic energy
TI_ENER Total internal energy
NCFORC
Keyword Description
X_FORCE X-force
Y_FORCE Y-force
Z_FORCE Z-force
R_FORCE R-force
PRESSURE Pressure
NODOUT
Keyword Description
X_DISP X-displacement
Y_DISP Y-displacement
Z_DISP Z-displacement
R_DISP Resultant displacement
X_VEL X-velocity
Y_VEL Y-velocity
Z_VEL Z-velocity
R_VEL Resultant velocity
X_ACC X-acceleration
Y_ACC Y-acceleration
Z_ACC Z-acceleration
R_ACC R-acceleration
Rotational components
RX_DISP XX-rotation
RY_DISP YY-rotation
RZ_DISP ZZ-rotation
RX_VEL XX-rotational velocity
RY_VEL YY-rotational velocity
RZ_VEL ZZ-rotational velocity
RX_ACC XX-rotational acceleration
RY_ACC YY-rotational acceleration
RZ_ACC ZZ-rotational acceleration
Injury coefficients
CSI Chest Severity Index
HIC15 Head Injury Coefficient (15 ms)
HIC36 Head Injury Coefficient (36 ms)
NODFOR
Keyword Description
X_FORCE X-force
Y_FORCE Y-force
Z_FORCE Z-force
R_FORCE Resultant force
X_TOTAL X-total force
Y_TOTAL Y-total force
Z_TOTAL Z-total force
R_TOTAL Total resultant force
Rigid Body Data
RBDOUT
Keyword Description
X_DISP X-displacement
Y_DISP Y-displacement
Z_DISP Z-displacement
R_DISP R-displacement
X_VEL X-velocity
Y_VEL Y-velocity
Z_VEL Z-velocity
R_VEL Resultant velocity
X_ACC X-acceleration
Y_ACC Y-acceleration
Z_ACC Z-acceleration
R_ACC R-acceleration
Rotational components
RX_DISP X-rotation
RY_DISP Y-rotation
RZ_DISP Z-rotation
RX_VEL X-velocity
RY_VEL Y-velocity
RZ_VEL Z-velocity
RX_ACC X-acceleration
RY_ACC Y-acceleration
RZ_ACC Z-acceleration
Injury coefficients
CSI Chest Severity Index
HIC15 Head Injury Coefficient (15 ms)
HIC36 Head Injury Coefficient (36 ms)
Reaction Forces
RCFORC
Keyword Description
X_FORCE X-force
Y_FORCE Y-force
Z_FORCE Z-force
R_FORCE R-force
XS_FORCE X-slave force
YS_FORCE Y-slave force
ZS_FORCE Z-slave force
RS_FORCE R-slave force
RigidWall Forces
RWFORC
Keyword Description
NORMAL Normal
X_FORCE X-force
Y_FORCE Y-force
Z_FORCE Z-force
Section Forces
SECFORC
Keyword Description
X_FORCE X-force
Y_FORCE Y-force
Z_FORCE Z-force
X_MOMENT X-moment
Y_MOMENT Y-moment
Z_MOMENT Z-moment
X_CENTER X-center
Y_CENTER Y-center
Z_CENTER Z-center
R_FORCE R-force
R_MOMENT R-moment
SPCFORC
Keyword Description
X_FORCE X-force
Y_FORCE Y-force
Z_FORCE Z-force
R_FORCE R-force
X_RES Total X-force
Y_RES Total Y-force
Z_RES Total Z-force
X_MOMENT X-moment
Y_MOMENT Y-moment
Z_MOMENT Z-moment
R_MOMENT R-moment
SWFORC
Keyword Description
AXIAL Axial force
SHEAR Shear force
APPENDIX B: LS-DYNA BINARY RESULT COMPONENTS

The table contains component numbers for element variables. These can be specified in the "Dyna" interface command to extract response variables. By adding 100, 200, 300 and 400 to component numbers 1 through 16, the component numbers for infinitesimal strains, Green-St. Venant strains, Almansi strains and strain rates, respectively, are obtained.
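For example (illustrative Python):

# Strain-related component numbers are obtained by adding a fixed
# offset to the corresponding stress component number (1..16).
OFFSETS = {'infinitesimal': 100, 'green': 200, 'almansi': 300, 'rate': 400}

def strain_component(stress_component, kind):
    assert 1 <= stress_component <= 16
    return stress_component + OFFSETS[kind]

print(strain_component(1, 'green'))   # component 201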
APPENDIX C: LS-DYNA BINOUT RESULT FILE AND COMPONENTS

ABSTAT
DynaASCII Binout Description
Keyword Component
VOLUME volume Volume
PRESSURE pressure Pressure
I_ENER internal_energy Internal energy
IN_FLOW_RATE dm_dt_in Input mass flow rate
OUT_FLOW_RATE dm_dt_out Output mass flow rate
MASS total_mass Mass
TEMP gas_temp Temperature
DENSITY density Density
AREA surface_area Area
- reaction Reaction
BNDOUT
DynaASCII Binout Description
Keyword Component
DEFORC
DynaASCII Binout Description
Keyword Component
X_FORCE x_force X-force
Y_FORCE y_force Y-force
Z_FORCE z_force Z-force
R_FORCE resultant_force Resultant force
- displacement Change in length
ELOUT
DynaASCII Binout Description
Keyword Component
Binout subdirectory solid
BXX_STRESS sig_xx XX-stress
BYY_STRESS sig_yy YY-stress
BZZ_STRESS sig_zz ZZ-stress
BXY_STRESS sig_xy XY-stress
BYZ_STRESS sig_yz YZ-stress
BZX_STRESS sig_zx ZX-stress
YIELD yield Yield function
BE_STRESS effsg Effective stress
BPRESSURE - Pressure
BMAX_SHEAR - Maximum shear stress
BMAX_P_STRESS - Maximum principal stress
BMIN_P_STRESS - Minimum principal stress
- eps_xx XX-strain
- eps_yy YY-strain
- eps_zz ZZ-strain
- eps_xy XY-strain
- eps_yz YZ-strain
- eps_zx ZX-strain
Binout subdirectory beam
AXIAL axial Axial force resultant
S_SHEAR shear_s s-Shear resultant
T_SHEAR shear_t t-Shear resultant
S_MOMENT moment_s s-Moment resultant
T_MOMENT moment_t t-Moment resultant
TORSION torsion Torsional resultant
SIG_11 - σ11
SIG_12 - σ12
SIG_31 - σ31
PLASTIC - Plastic strain
ELOUT
DynaASCII Binout Description
Keyword Component
Binout subdirectory shell – stress components
XX_STRESS sig_xx XX-stress
YY_STRESS sig_yy YY-stress
ZZ_STRESS sig_zz ZZ-stress
XY_STRESS sig_xy XY-stress
YZ_STRESS sig_yz YZ-stress
ZX_STRESS sig_zx ZX-stress
P_STRAIN plastic_strain Plastic strain
PRESSURE - Pressure
E_STRESS - Effective stress
MAX_SHEAR - Maximum shear stress
MAX_P_STRESS - Maximum principal stress
MIN_P_STRESS - Minimum principal stress
Binout subdirectory shell – strain components
XX_STRAIN upper_eps_xx XX-strain
lower_eps_xx
YY_STRAIN upper_eps_yy YY-strain
lower_eps_yy
ZZ_STRAIN upper_eps_zz ZZ-strain
lower_eps_zz
XY_STRAIN upper_eps_xy XY-strain
lower_eps_xy
YZ_STRAIN upper_eps_yz YZ-strain
lower_eps_yz
ZX_STRAIN upper_eps_zx ZX-strain
lower_eps_zx
E_STRAIN - Effective strain
MAX_S_STRAIN - Maximum shear strain
MAX_P_STRAIN - Maximum principal strain
MIN_P_STRAIN - Minimum principal strain
GCEOUT
DynaASCII Binout Description
Keyword Component
X_FORCE x_force X-force
Y_FORCE y_force Y-force
Z_FORCE z_force Z-force
R_FORCE force_magnitude Force magnitude
X_MOMENT x_moment X-moment
Y_MOMENT y_moment Y-moment
Z_MOMENT z_moment Z-moment
R_MOMENT moment_magnitude Moment magnitude
Global Statistics
GLSTAT
DynaASCII Binout Description
Keyword Component
K_ENER kinetic_energy Kinetic energy
I_ENER internal_energy Internal energy
T_ENER total_energy Total energy
RATIO energy_ratio Energy ratio
SW_ENER stonewall_energy Stonewall energy
D_ENER spring_and_damper_energy Spring & Damper energy
HG_ENER hourglass_energy Hourglass energy
SI_ENER sliding_interface_energy Sliding interface energy
EW_ENER external_work External work
X_VEL global_x_velocity Global x-velocity
Y_VEL global_y_velocity Global y-velocity
Z_VEL global_z_velocity Global z-velocity
T_VEL - Velocity
- system_damping_energy System damping energy
- energy_ratio_wo_eroded Energy ratio w/o eroded
- eroded_internal_energy Eroded internal energy
- eroded_kinetic_energy Eroded kinetic energy
JNTFORC
DynaASCII Binout Description
Keyword Component
Binout subdirectory joints
X_FORCE x_force X-force
Y_FORCE y_force Y-force
Z_FORCE z_force Z-force
X_MOMENT x_moment X-moment
Y_MOMENT y_moment Y-moment
Z_MOMENT z_moment Z-moment
R_FORCE resultant_force R-force
R_MOMENT resultant_moment R-moment
Binout subdirectory type0
- d(phi)_dt d(phi)/dt
- d(psi)_dt d(psi)/dt (degrees)
- d(theta)_dt d(theta)/dt (degrees)
- joint_energy joint energy
- phi_degrees phi (degrees)
- phi_moment_damping phi moment-damping
- phi_moment_stiffness phi moment-stiffness
- phi_moment_total phi moment-total
- psi_degrees psi (degrees)
- psi_moment_damping psi-moment-damping
- psi_moment_stiffness psi-moment-stiffness
- psi_moment_total psi-moment-total
- theta_degrees theta (degrees)
- theta_moment_damping theta-moment-damping
- theta_moment_stiffness theta-moment-stiffness
- theta_moment_total theta-moment-total
Material Summary
MATSUM
DynaASCII Binout Description
Keyword Component
K_ENER kinetic_energy Kinetic energy
I_ENER internal_energy Internal energy
X_MOMENTUM x_momentum X-momentum
Y_MOMENTUM y_momentum Y-momentum
Z_MOMENTUM z_momentum Z-momentum
MOMENTUM - Momentum
XRB_VEL x_rbvelocity X-rigid body velocity
YRB_VEL y_rbvelocity Y-rigid body velocity
ZRB_VEL z_rbvelocity Z-rigid body velocity
RB_VEL - Rigid body velocity
TK_ENER - Total kinetic energy
TI_ENER - Total internal energy
- hourglass_energy Hourglass energy
NCFORC
DynaASCII Binout Description
Keyword Component
Binout subdirectory master_00001 and slave_00001
X_FORCE x_force X-force
Y_FORCE y_force Y-force
Z_FORCE z_force Z-force
R_FORCE - Resultant Force
PRESSURE pressure Pressure
- x X coordinate
- y Y coordinate
- z Z coordinate
NODOUT
DynaASCII Binout Description
Keyword Component
X_DISP x_displacement X-displacement
Y_DISP y_displacement Y-displacement
Z_DISP z_displacement Z-displacement
R_DISP - Resultant displacement
X_VEL x_velocity X-velocity
Y_VEL y_velocity Y-velocity
Z_VEL z_velocity Z-velocity
R_VEL - Resultant velocity
X_ACC x_acceleration X-acceleration
Y_ACC y_acceleration Y-acceleration
Z_ACC z_acceleration Z-acceleration
R_ACC - Resultant acceleration
- x_coordinate X-coordinate
- y_coordinate Y-coordinate
- z_coordinate Z-coordinate
Rotational components
RX_DISP rx_displacement XX-rotation
RY_DISP ry_displacement YY-rotation
RZ_DISP rz_displacement ZZ-rotation
RX_VEL rx_velocity XX-rotational velocity
RY_VEL ry_velocity YY-rotational velocity
RZ_VEL rz_velocity ZZ-rotational velocity
RX_ACC rx_acceleration XX-rotational acceleration
RY_ACC ry_acceleration YY-rotational acceleration
RZ_ACC rz_acceleration ZZ-rotational acceleration
Injury coefficients
CSI CSI Chest Severity Index
HIC15 HIC15 Head Injury Coefficient (15 ms)
HIC36 HIC36 Head Injury Coefficient (36 ms)
Nodal Forces
NODFOR
DynaASCII Binout Description
Keyword Component
X_FORCE x_force X-force
Y_FORCE y_force Y-force
Z_FORCE z_force Z-force
R_FORCE - Resultant force
X_TOTAL x_total X-total force
Y_TOTAL y_total Y-total force
Z_TOTAL z_total Z-total force
R_TOTAL - Total resultant force
- x_local Local X-force
- y_local Local Y-force
- z_local Local Z-force
- energy Energy
- etotal Total Energy
RBDOUT
DynaASCII Binout Description
Keyword Component
X_DISP global_dx X-displacement
Y_DISP global_dy Y-displacement
Z_DISP global_dz Z-displacement
R_DISP - Resultant displacement
X_VEL global_vx X-velocity
Y_VEL global_vy Y-velocity
Z_VEL global_vz Z-velocity
R_VEL - Resultant velocity
X_ACC global_ax X-acceleration
Y_ACC global_ay Y-acceleration
Z_ACC global_az Z-acceleration
R_ACC - Resultant acceleration
- global_x X-coordinate
- global_y Y-coordinate
- global_z Z-coordinate
- local_dx Local X-displacement
- local_dy Local Y-displacement
- local_dz Local Z-displacement
- local_vx Local X-velocity
- local_vy Local Y-velocity
- local_vz Local Z-velocity
- local_ax Local X-acceleration
- local_ay Local Y-acceleration
- local_az Local Z-acceleration
RBDOUT
Rotational components
DynaASCII Binout Description
Keyword Component
RX_DISP global_rdx X-rotation
RY_DISP global_rdy Y-rotation
RZ_DISP global_rdz Z-rotation
RX_VEL global_rvx X-rotational velocity
RY_VEL global_rvy Y-rotational velocity
RZ_VEL global_rvz Z-rotational velocity
RX_ACC global_rax X-rotational acceleration
RY_ACC global_ray Y-rotational acceleration
RZ_ACC global_raz Z-rotational acceleration
- local_rdx Local X-rotation
- local_rdy Local Y-rotation
- local_rdz Local Z-rotation
- local_rvx Local X-rotational velocity
- local_rvy Local Y-rotational velocity
- local_rvz Local Z-rotational velocity
- local_rax Local X-rotational acceleration
- local_ray Local Y-rotational acceleration
- local_raz Local Z-rotational acceleration
Direction cosines
- dircos_11 11 direction cosine
- dircos_12 12 direction cosine
- dircos_13 13 direction cosine
- dircos_21 21 direction cosine
- dircos_22 22 direction cosine
- dircos_23 23 direction cosine
- dircos_31 31 direction cosine
- dircos_32 32 direction cosine
- dircos_33 33 direction cosine
Injury coefficients
CSI CSI Chest Severity Index
HIC15 HIC15 Head Injury Coefficient (15 ms)
HIC36 HIC36 Head Injury Coefficient (36 ms)
RCFORC
DynaASCII Binout Description
Keyword Component
X_FORCE x_force X-force
Y_FORCE y_force Y-force
Z_FORCE z_force Z-force
R_FORCE - Resultant force
XS_FORCE - X-slave force
YS_FORCE - Y-slave force
ZS_FORCE - Z-slave force
RS_FORCE - Resultant slave force
- mass Mass
RigidWall Forces
RWFORC
DynaASCII Binout Description
Keyword Component
Binout subdirectory forces
NORMAL normal_force Normal force
X_FORCE x_force X-force
Y_FORCE y_force Y-force
Z_FORCE z_force Z-force
Section Forces
SECFORC
DynaASCII Binout Description
Keyword Component
X_FORCE x_force X-force
Y_FORCE y_force Y-force
Z_FORCE z_force Z-force
X_MOMENT x_moment X-moment
Y_MOMENT y_moment Y-moment
Z_MOMENT z_moment Z-moment
X_CENTER x_centroid X-center
Y_CENTER y_centroid Y-center
Z_CENTER z_centroid Z-center
R_FORCE total_force Resultant force
R_MOMENT total_moment Resultant moment
- area Area
SPCFORC
DynaASCII Binout Description
Keyword Component
X_FORCE x_force X-force
Y_FORCE y_force Y-force
Z_FORCE z_force Z-force
R_FORCE - Resultant force
X_RES x_resultant Total X-force
Y_RES y_resultant Total Y-force
Z_RES z_resultant Total Z-force
X_MOMENT x_moment X-moment
Y_MOMENT y_moment Y-moment
Z_MOMENT z_moment Z-moment
R_MOMENT - Resultant moment
SWFORC
DynaASCII Binout Description
Keyword Component
AXIAL axial Axial force
SHEAR shear Shear force
- failure_flag Failure flag
Database files
The Experiments file appears in the solver directory and is used to save the experimental point coordinates for the
analysis runs. The file consists of one line for each experimental point, in the format

x[1] x[2] ... x[n]

where x[1] to x[n] are the values of the n solver design variables at the experimental point.
The AnalysisResults file is used to save the responses at the experimental points and appears in the solver directory.
Every line describes an experimental point and gives the response values at that point, in the format

x[1] x[2] ... x[n] RespVal[1] RespVal[2] ... RespVal[m]

where x[1] to x[n] are the values of the n solver design variables at the experimental point and RespVal[1] to
RespVal[m] are the values of the m solver responses. Values of 2.0 × 10^30 are assigned to responses of
simulations with error terminations. The AnalysisResults file is synchronous with the
Experiments file.
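As an illustration, the two files could be post-processed together as follows. This is a minimal Python sketch under the format assumptions above; the number of design variables n is problem dependent, and the 2.0 × 10^30 sentinel marks error-terminated simulations.

# Read the synchronous Experiments and AnalysisResults files (illustrative sketch).
ERROR_VALUE = 2.0e30  # assigned to responses of error-terminated simulations

def read_rows(path):
    """Return one list of floats per non-empty, whitespace-delimited line."""
    with open(path) as f:
        return [[float(v) for v in line.split()] for line in f if line.strip()]

n = 2  # number of solver design variables (problem dependent)
experiments = read_rows("Experiments")
results = read_rows("AnalysisResults")

for point, row in zip(experiments, results):  # the two files are synchronous
    responses = row[n:]
    if any(abs(v) >= ERROR_VALUE for v in responses):
        print(point, "-> error termination")
    else:
        print(point, "->", responses)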
The DesignFunctions file, which appears in the solver directory, is used to save a description of the
polynomial design functions:
Entities                        Remark
Number of variables (n)
Lower bound, Upper bound        n rows
Number of responses (m)         Lines below repeated m times
Response type
Response name
Response command
Solver ID
Unused
Unused
Unused
Polynomial constants            Number of constants in a row
Flags for active constants      (Number of constants - 1) in a row
Example:
2
2.0000000000000001e-01 4.0000000000000000e+00
1.0000000000000001e-01 1.6000000000000001e+00
2
79
Weight
get_wt
0
0
1.0000000000000000e+30
-1.0000000000000000e+30
1.7313148666666667e-01 9.0171633333333390e-01 -8.4697225964912348e-01 …
-1.5711848567878486e-16 5.8787263157894765e-01 4.9821866666666648e-01
1 1 1 1 1
0
79
Stress
get_str
0
0
1.0000000000000000e+30
The OptimizationHistory file is used to save the optimization history results and appears in the work directory. Each line
contains the values at the optimum point of an iteration.
Entities Count
Objective values Number of objectives
Variables Number of variables
Variable lower bounds Number of variables
Variable upper bounds Number of variables
RMS errors Number of responses
Average errors Number of responses
Maximum errors Number of responses
R2 errors Number of responses
Adjusted R2 errors Number of responses
PRESS errors Number of responses
Prediction R2 Number of responses
Maximum prediction error Number of responses
Responses Number of responses
Multi-objective 1
Constraint values Number of constraints
Composite values Number of composites
Responses (computed) Number of responses
Max. constraint violation 1
Composites (computed) Number of composites
Constraints (computed) Number of constraints
Objectives (computed) Number of objectives
Multi-objective (computed) 1
Max. constraint violation (computed) 1
Constants Number of constants
Dependents Number of dependents
The ExtendedResults file contains all points represented in the AnalysisResults file and appears in the solver directory.
All values are based on the simulation results. A line has the following format:
Entities Count
Objective weights Number of objectives
Objective values Number of objectives
Variables Number of solver variables
Responses Number of solver responses
Multi-objective 1
Constraint values Number of constraints
Composite values Number of composites
Max. constraint violation 1
Constants Number of constants
Dependents Number of dependents
The OptimumResults file contains just the optimum design point data and appears in the solver directory. All values are
metamodel values, i.e. interpolated.
Entities Count
Objective weights Number of objectives
Objective values Number of objectives
Variables Number of variables
Responses Number of responses
Multi-objective 1
Constraint values Number of constraints
Composite values Number of composites
Max. constraint violation 1
Constants Number of constants
Dependents Number of dependents
Mathematical Expressions
Mathematical expressions are available for the following entities:
dependent
history
response
composite
multiobjective
Expression                                         Symbols
Integral(expression[,t_lower,t_upper,variable])    ∫_a^b f(t) dg(t)
Derivative(expression[,T_constant])                ∆f/∆t|t=T ≈ df/dt|t=T
Min(expression[,t_lower,t_upper])                  f_min = min_t [f(t)]
Max(expression[,t_lower,t_upper])                  f_max = max_t [f(t)]
Initial(expression)                                First function value on record
Final(expression)                                  Last function value on record
Lookup(expression,value)                           Inverse function t(f = F)
LookupMin(expression[,t_lower,t_upper])            Inverse function t(f = f_min)
LookupMax(expression[,t_lower,t_upper])            Inverse function t(f = f_max)
"Generic" implies that the quantity can be an expression, a previously defined entity or a constant number.
An entity (which may be specified in an expression) can be any previously defined LS-OPT entity. Thus
constant, variable, dependent, history, response and composite are acceptable. An
expression is given in double quotes, e.g. "Displacement(t)".
Omitting the lower and upper bounds implies operation over the entire available history.
The Lookup function allows finding the value of t for a specified value of f(t) = F. If such a value cannot
be found, the largest value of t in the history is returned. The LookupMin and LookupMax functions
return the value of t at the minimum or maximum respectively.
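The following minimal Python sketch illustrates this behavior for a piecewise linear history; the helper is illustrative only and not LS-OPT code:

def lookup(t, f, F):
    """Return the first t at which the piecewise linear history satisfies f(t) = F;
    if no such value exists, return the largest t in the history."""
    for i in range(len(t) - 1):
        f0, f1 = f[i], f[i + 1]
        if f0 != f1 and min(f0, f1) <= F <= max(f0, f1):
            # linear interpolation inside the bracketing segment
            return t[i] + (F - f0) * (t[i + 1] - t[i]) / (f1 - f0)
    return t[-1]

# Using the his2 data from the example later in this appendix:
print(lookup([0.0, 100.0, 200.0, 300.0], [0.0, 2000.0, 2000.0, 2000.0], 1000.0))  # 50.0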
The implied variable represented in the first column of any history file is t. Therefore all history files
produced by the DynaASCII extraction command contain functions of t. The fourth argument of the
Integral function defaults to t. The variable t must increase monotonically.
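Under these conventions, the integral of f with respect to g over piecewise linear histories reduces to a trapezoidal sum. A minimal Python sketch, illustrative only, with both histories sampled at the same monotonically increasing times:

def integral(f, g):
    """Stieltjes integral of history f with respect to history g,
    both piecewise linear over the same monotonic time points."""
    return sum(0.5 * (f[i] + f[i + 1]) * (g[i + 1] - g[i])
               for i in range(len(f) - 1))

# With g = t this is the ordinary time integral (trapezoidal rule):
t = [0.0, 100.0, 200.0, 300.0]
f = [0.0, 1000.0, 500.0, 500.0]
print(integral(f, t))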
The derivative assumes a piecewise linear function defined by the points in the history.n file. T_constant in
the Derivative function defaults to the end time.
If a time is specified smaller than the smallest time value of the computed history, the first value is returned
(same as Initial). If a time is specified larger than the largest time value of the computed history, the
last value is returned (same as Final). For derivatives the first or last slopes are returned respectively.
• The variable fdstepsize is used to find the gradients of expression composite functions. These
are used in the optimization process.
• The historysize is used when new histories are generated.
The parameter type represents the highest entity in the hierarchy. Thus constants are included in the variable
parameters.
In LS-OPT, expressions can be entered for variables, constants, dependents, histories, responses,
constraints and objectives.
Example:
The following example shows a simple evaluation of variables and functions. The histories are specified in
plot files his1 and his2. A third function his3 is constructed from the files by averaging.
File his1:
0 0.0
100 1000
200 500
300 500
File his2:
0 0.0
100 2000
200 2000
300 2000
Input file:
"Mathematical Expressions"
$
$ CONSTANTS
$
constants 3
constant 'lowerlimit' 0
constant 'upperlimit' .200
constant 'angle' 30
$
$ DESIGN VARIABLE DEFINITIONS
$
variables 2
Variable 'x1' 45
Lower bound variable 'x1' -10
Upper bound variable 'x1' 50
Variable 'x2' 45
Lower bound variable 'x2' -10
Upper bound variable 'x2' 50
$
$ DEPENDENT VARIABLES
$
dependents 2
dependent 'll' {lowerlimit * 1000}
dependent 'ul' {upperlimit * 1000}
$
.
.
.
Example 2:
constant 'v0' 15.65
$----------------------------------------------------------------------------
$ Extractions
$----------------------------------------------------------------------------
history 'engine_velocity' "DynaASCII nodout X_VEL 73579 TIMESTEP 0.0 SAE 30"
history 'Apillar_velocity_1' "DynaASCII nodout X_VEL 41195 TIMESTEP 0.0 SAE 30"
history 'Apillar_velocity_2' "DynaASCII nodout X_VEL 17251 TIMESTEP 0.0 SAE 30"
history 'global_velocity' "DynaASCII glstat X_VEL 0 TIMESTEP 0.0"
$----------------------------------------------------------------------------
$ Mathematical Expressions for dependent histories
$----------------------------------------------------------------------------
history 'Apillar_velocity_average' {(Apillar_velocity_1 +
Apillar_velocity_2)/2}
$
$ Find the time when the engine velocity = 0
$
response 'time_to_engine_zero' expression {Lookup("engine_velocity(t)",0)}
$
$ Find the average velocity at time of engine velocity = 0
$
response 'vel_A_engine_zero' expression {Apillar_velocity_average
(time_to_engine_zero)}
$
$ Integrate the average A-pillar velocity up to zero engine velocity
$ Divide by the time to get the average
$
response 'PULSE_1' expression {Integral
("Apillar_velocity_average(t)",
0,
time_to_engine_zero
)
/time_to_engine_zero}
$
$ Find the time at which the global velocity is zero
$
response 'time_to_zero_velocity' expression {Lookup("global_velocity(t)",0)}
$
$ Find the average A-pillar velocity where global velocity is zero
$
response 'velocity_final' {Apillar_velocity_average(time_to_zero_velocity)}
$
$ Integrate the average A-pillar velocity between the two events
$ Divide by the time interval to get the average
$
response 'PULSE_2' expression {Integral
("Apillar_velocity_average(t)",
time_to_engine_zero,
time_to_zero_velocity
)
/(time_to_zero_velocity - time_to_engine_zero)}
Simulated Annealing
The Simulated Annealing (SA) algorithm for global optimization can be viewed as an extension to local
stochastic optimization techniques. The basic idea is very simple. SA takes a (biased) random walk through
the space and aims to find a global optimum from among multiple local solutions. In trying to minimize a
function, instead of always going downhill, the SA algorithm goes downhill most of the time, but it
occasionally also goes uphill. This allows simulated annealing to move consistently towards lower
function values, yet still 'jump' out of local minima and globally explore different states of the optimized
system. The SA algorithm was first formulated for various combinatorial problems, [28]. The approach was
later extended to continuous optimization problems. In [42] the simulated annealing algorithm was adapted
to search for optimal Latin hypercube designs.
The term 'simulated annealing' derives from a rough analogy with the way that liquids freeze and
crystallize, or metals cool and anneal, starting at a high temperature [28]. When the liquid is hot, the
molecules move freely, and very many changes of energy can occur. When the liquid is cooled, this thermal
mobility is partially lost. If the rate of cooling is sufficiently slow, the atoms are often able to line
themselves up and form a pure crystal, which is the state of minimum (most stable) energy for this physical
system. If a liquid metal is cooled quickly or 'quenched', it usually does not reach this state but rather ends
up in a polycrystalline or amorphous state having somewhat higher energy. So the essence of the whole
process is slow cooling.
Nature's minimization algorithm is based on the fact that a system in thermal equilibrium at temperature T
has its energy, E, probabilistically distributed among all different energy states as determined by the
Boltzmann distribution:

Prob(E) ∝ exp(−E / (κB T))
Hence, even at low temperature, there is a chance, albeit very small, of a system being in a high-energy
state. This slight probability of choosing a state that gives higher energy is what allows the physical system
to get out of local (i.e. amorphous) minima in favor of finding a better, more stable, orientation. The
quantity κB (Boltzmann's constant) is a constant of nature that relates temperature to energy.
In simulated annealing parlance, the objective function of the optimization problem is often called
'energy'. The optimization algorithm proceeds in small iterative steps. At each iteration, the SA algorithm
randomly generates a candidate state and, through a random mechanism (controlled by a parameter called
temperature in view of the analogy with the physical process), decides whether to move to the candidate
state or to stay in the current one at the next iteration. More formally, a general SA algorithm can be described as
follows.
Step 0. Let x(0) ∈ X be a given starting state of the optimized system, with energy E(0) = E(x(0)).
Start the sequence of observed states: X(0) = {x(0)}.
Set the starting temperature T(0) to a high value, T(0) = Tmax, and initialize the counter of iterations to k = 0.
Step 1. Sample a point x′ from the candidate distribution, D(X(k)), and set X(k+1) = X(k) ∪ {x′}.
The sequence X(k+1) contains all the states observed up to iteration k.
Step 2. Use the acceptance function A(x′, x, T(k)) to decide whether x′ becomes the new current state or the current state is retained.
Step 3. Apply the cooling schedule to the temperature, i.e. set T(k+1) = C(X(k+1), T(k)).
Step 4. Check a stopping criterion and, if it fails, set k := k + 1 and go back to Step 1.
The distribution of the next candidate state, D, the acceptance function, A, the cooling schedule, C, and the
stopping criterion must be specified in order to define the SA algorithm. Appropriate choices are essential to
guarantee the efficiency of the algorithm. Many different definitions of the above entities have been given in
the existing literature about SA. These will be discussed in the next few paragraphs, with emphasis on
some key ideas that have driven the choices of researchers in this field.
In the existing literature about SA algorithms very few acceptance functions have been employed. In most
cases the acceptance function is the so-called Metropolis function:
A(x′, x, T) = 1 / (1 + exp((E(x′) − E(x)) / T))          (E.5)
The theoretical motivation for such a restricted choice of acceptance functions can be found in [55]. It is
shown that under appropriate assumptions, many acceptance functions, which share some properties, are
equivalent to (E.4) or (E.5) after a monotonic transformation of the temperature T.
Due to the difficult nature of the problems solved by SA algorithms, it is hard, if not impossible, to define a
general stopping rule, which guarantees to stop when the global optimum has been detected or when there is
a sufficiently high probability of having detected it. Thus the stopping rules proposed in the literature about
SA all have a heuristic nature and are, in fact, more problem dependent than SA algorithm dependent.
The choice of the next candidate distribution and the cooling schedule for the temperature are typically the
most important (and strongly interrelated) issues in the definition of a SA algorithm. The next candidate
state, x', is usually selected randomly among all the neighbors of the current solution, x, with the same
probability for all neighbors. However, with a complicated neighbor structure, a non-uniformly random
selection might be appropriate. The choice of the size of the neighborhood typically follows the idea that
when the current function value is far from the global minimum, the algorithm should have more freedom,
i.e. larger 'step sizes' are allowed.
The basic idea of the cooling schedule is to start the algorithm off at high temperature and then gradually to
drop the temperature to zero. The primary goal is to quickly reach the so called effective temperature,
roughly defined as the temperature at which low function values are preferred but it is still possible to
explore different states of the optimized system, [7]. After that the simulated annealing algorithm lowers the
temperature by slow stages until the system 'freezes' and no further changes occur. A straightforward and
most popular strategy is to decrement T by a constant factor µT < 1 every νT iterations:

T := µT T          (E.6)
The value of νT should be large enough, so that 'thermal equilibrium' is achieved before reducing the
temperature. A rule of thumb is to take νT proportional to the size of neighborhood of the current solution.
Often, the cooling schedule (E.6) also provides a condition for terminating the SA iterations, e.g. stopping once the temperature falls below a prescribed minimum value.
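Pulling Steps 0 to 4 together, the following is a minimal Python sketch of an SA minimizer that uses the acceptance function (E.5) and the geometric cooling schedule (E.6); all parameter values and the test problem are illustrative only:

import math
import random

def simulated_annealing(energy, neighbor, x0, t_max=1.0, t_min=1e-3,
                        mu_t=0.9, nu_t=50):
    """Minimize 'energy'; 'neighbor' samples the next candidate state."""
    x, e = x0, energy(x0)                   # Step 0: starting state
    best_x, best_e = x, e
    T = t_max                               # start at a high temperature
    while T > t_min:                        # stop once the system has 'frozen'
        for _ in range(nu_t):               # nu_t moves per temperature stage
            x_new = neighbor(x)             # Step 1: candidate state
            e_new = energy(x_new)
            delta = (e_new - e) / T
            # Step 2: acceptance function (E.5); guard against exp overflow
            if delta < 700 and random.random() < 1.0 / (1.0 + math.exp(delta)):
                x, e = x_new, e_new
                if e < best_e:
                    best_x, best_e = x, e
        T *= mu_t                           # Step 3: cooling schedule (E.6)
    return best_x, best_e                   # Step 4: stopping criterion reached

# Usage: a one-dimensional multimodal test function and a random-step neighbor.
f = lambda x: x * x + 10.0 * math.sin(3.0 * x)
step = lambda x: x + random.uniform(-0.5, 0.5)
print(simulated_annealing(f, step, x0=4.0))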
Some of the convergence results for SA rely on the fact that the support of the next candidate distribution is
the whole feasible region (though in some cases the probability of sampling states far from the current one
decreases to 0 as the iteration counter increases). For these convergence results it is often only required that
the temperature decreases to 0, no matter at which rate. For some other convergence results the support of
the next candidate distribution is only a neighborhood of the current state, and to make the algorithm able to
climb the barriers separating the different local minima, it is required that the temperature decreases to 0
slowly enough.
It is clear that the selection of the initial temperature, Tmax, has a profound influence on the rate of
convergence of the SA algorithm. At temperatures much higher than the effective temperature, the
algorithm behaves very much like a random search, while at temperatures much lower than the effective
temperature it behaves like (an inefficient implementation of) a deterministic algorithm for local
optimization. Intuitively, the cooling schedule (E.6) should begin one order of magnitude higher than the
effective temperature and end one order of magnitude lower, [7].
It is difficult to give the initial temperature directly, because this value depends on the neighborhood
structure, the scale of the objective function, the initial solution, etc. In [28] a suitable initial temperature is
one that results in an average uphill move acceptance probability of about 0.8. This Tmax can be estimated by
conducting an initial search, in which all uphill moves are accepted and calculating the average objective
increase observed. In some other papers it is suggested that the parameter Tmax be set to a value larger
than the expected value of |E′ − E| encountered from move to move. In [7] it is suggested to spend most
of the computational time in short sample runs with different Tmax in order to detect the effective
temperature. In practice, the optimal control of T may require physical insight and trial-and-error
experiments. According to [9], "choosing an annealing schedule for practical purposes is still something of a
black art".
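The rule of thumb from [28] could be sketched as follows. Note that this sketch assumes the classical uphill acceptance probability exp(−∆E/T) rather than (E.5), so that an average acceptance of 0.8 corresponds to Tmax = ∆E+ / ln(1/0.8), where ∆E+ is the average uphill increase observed in a pilot run that accepts every move:

import math
import random

def estimate_t_max(energy, neighbor, x0, n_moves=200, target=0.8):
    """Pilot run accepting all moves; choose Tmax so that the average uphill
    move would be accepted with probability ~target under exp(-dE/T)."""
    x, e = x0, energy(x0)
    uphill = []
    for _ in range(n_moves):
        x = neighbor(x)                 # every move is accepted in the pilot run
        e_new = energy(x)
        if e_new > e:
            uphill.append(e_new - e)    # record the uphill increases
        e = e_new
    if not uphill:
        return 0.0                      # no uphill moves were observed
    return (sum(uphill) / len(uphill)) / math.log(1.0 / target)

# Usage, with the same illustrative test problem as in the sketch above:
f = lambda x: x * x + 10.0 * math.sin(3.0 * x)
step = lambda x: x + random.uniform(-0.5, 0.5)
print(estimate_t_max(f, step, x0=4.0))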
Simulated annealing has proved surprisingly effective for a wide variety of hard optimization problems in
science and engineering. Many of the applications in our list of references attest to the power of the method.
This is not to imply that applying simulated annealing to a difficult real-world problem
will be easy. Under real-life conditions, the energy trajectory, i.e. the sequence of energies following each
accepted move, and the energy landscape itself can be extremely complex. Note that a state space that
consists of wide areas with no energy change, and a few "deep, narrow valleys", or even worse, "golf
holes", is not well suited for simulated annealing, because in a deep, narrow valley almost all random steps are
uphill. Choosing a proper stepping scheme is crucial for SA in these situations. However, experience has
shown that simulated annealing algorithms are more likely to get trapped in the largest basin, which is also often
the basin of attraction of the global minimum or of a deep local minimum. In any case, a possibility that
can always be employed with simulated annealing is to adopt a multistart strategy, i.e. to perform many
different runs of the SA algorithm with different starting points.
Another potential drawback of using SA for hard optimization problems is that finding a good solution can
often take an unacceptably long time. While SA algorithms may quickly detect the region of the global
optimum, they often require many iterations to refine the approximation. For small and moderate
optimization problems, one may be able to construct effective procedures that provide similar results much
more quickly, especially in cases when most of the computing time is spent on calculations of values of the
objective function. It should be noted, however, that for large-scale multidimensional problems an algorithm
that always (or often) obtains a solution near the global optimum is valuable, since various local
deterministic optimization methods allow quick refinement of a nearly correct solution.
In summary, simulated annealing is a powerful method for global optimization in challenging real world
problems. Certainly, some "trial and error" experimentation is required for an effective implementation of
the algorithm. The energy (cost) function should employ some heuristic related to the problem at hand,
clearly reflecting how 'good' or 'bad' a given solution is. Random perturbations of the system state and the
corresponding cost change calculations should be simple enough that the SA algorithm can perform its
iterations very fast. The scalar parameters of the simulated annealing algorithm (Tmax, µT and νT in particular)
have to be chosen carefully. If the parameters are chosen such that the optimization evolves too fast, the
solution converges prematurely to some, possibly good, solution that depends on the initial state of the problem.
Glossary
ANOVA. Analysis of variance. Used to perform variable screening by identifying insignificant variables.
Variable regression coefficients are ranked based on their significance as obtained through a partial F-test.
(See also variable screening).
Bias error. The total error (the difference between the exact and computed response) is composed of a
random and a bias component. The bias component is a systematic deviation between the chosen model
(approximation type) and the exact response of the structure (FEA analysis is usually considered to be the
exact response). Also known as the modeling error. (See also random error).
Binout. The name of the binary output file generated by LS-DYNA (Version 970 onwards).
Composite function. A function constructed by combining responses and design variables into a single
value. Symbolized by F.
Concurrent simulation. The running of simulation tasks in parallel without message passing between the
tasks.
Confidence interval. The interval in which a parameter may occur with a specified level of confidence.
Computed using Student’s t-test. Typically applied to accompany the significance of a variable in the form
of an error bar.
Constraint. An absolute limit on a response variable specified in terms of an upper or lower limit.
Constrained optimization. The mathematical optimization of a function subject to specified limits on other
functions.
Conventional Design. The procedure of using experience and/or intuition and/or ad hoc rules to improve a
design.
339
APPENDIX G: GLOSSARY
Design formula. A simple mathematical expression which gives the response of a design when the design
variables are substituted. See response surface.
Design space. A region in the n-dimensional space of the design variables (x1 through xn) to which the
design is limited. The design space is specified by upper and lower bounds on the design variables.
Response variables can also be used to bound the design space.
Design surface. The response variable as a function of the design variables, used to construct the
formulation of a design problem. (See also response surface, design rule).
Design sensitivity. The gradient vector of the response. The derivatives of the response function in terms of
the design variables. df /dxi.
Design variable. An independent design parameter which is allowed to vary in order to change the design.
Symbolized by xi, or by the vector x containing several design variables.
Discipline. An area of analysis requiring a specific set of simulation tools, usually because of the unique
nature of the physics involved, e.g. structural dynamics or fluid dynamics. In the context of MDO, often
used interchangeably with solver.
D-optimal. The state of an experimental design in which the determinant of the moment matrix X^T X of
the least squares formulation is maximized.
Elliptic approximation. An approximation in which only the diagonal Hessian terms are used.
Experimental Design. The selection of designs to enable the construction of a design response surface.
Function. A mathematical expression for a response variable in terms of design variables. Often used
interchangeably with “response”. Symbolized by f.
Function evaluation. Using a solver to analyze a single design and produce a result. See Simulation.
Global variable. A variable of which the scope spans across all the design disciplines or solvers. Used in
the MDO context.
Global Optimization. The mathematical procedure for finding the global optimum in the design space. E.g.
Genetic Algorithm, Particle Swarm, etc.
Gradient vector. A vector consisting of the derivatives of a function f in terms of a number of variables x1
to xn. s = [df /dxi]. See Design Sensitivity.
History. Response history containing two columns of (usually time) data generated by a simulation.
Infeasible Design. A design which does not comply with the constraint functions. An entire design space or
region of interest can sometimes be infeasible.
Iteration. A cycle involving an experimental design, function evaluations of the designs, approximation and
optimization of the approximate problem.
Latin Hypercube Sampling. The use of a constrained random experimental design as a point selection
scheme for response approximation.
Least Squares Approximation. The determination of the coefficients in a mathematical expression so that
it approximates certain experimental results by the minimization of the sum of the squares of the
approximation errors. Used to determine response surfaces as well as calibrating analysis models.
Local variable. A variable of which the scope is limited to a particular discipline or disciplines. Used in the
MDO context.
Metamodeling. The construction of surrogate design models such as polynomial response surfaces,
Artificial Neural Networks or Kriging surfaces from simulations at a set of design points.
Min-Max optimization problem. An optimization problem in which the maximum value considering
several responses or functions is minimized.
Model calibration. The optimal adjustment of parameters in a numerical model to simulate the physical
model as closely as possible.
Multidisciplinary design optimization (MDO). The inclusion of multiple disciplines in the design
optimization process. In general, only some design variables need to be shared between the disciplines to
provide limited coupling in the optimization of a multidisciplinary target or objective.
Multi-objective. An objective function which is constituted of more than one objective. Symbolized by F.
Neural network approximation. The use of trained feed-forward neural networks to perform non-linear
regression, thereby constructing a non-linear response surface.
Objective. A function of the design variables that the designer wishes to minimize or maximize. If there
exists more than one objective, the objectives have to be combined mathematically into a single objective.
Symbolized by Φ.
Optimal design. The methodology of using mathematical optimization tools to improve a design iteratively
with the objective of finding the ‘best’ design in terms of predetermined criteria.
Pareto optimal. A multi-objective design is Pareto-optimal if none of the objectives can be improved
without at least one objective being affected adversely. Also referred to as functionally efficient.
Preference function. A function of objectives used to combine several objectives into a single one suitable
for the standard MP formulation.
Random error. The total error (the difference between the exact and computed response) is composed of
a random and a bias component. The random component is, as the name implies, a random deviation from
the nominal value of the exact response, often assumed to be normally distributed around the nominal value.
(See also bias error).
Reasonable design space. A subregion of the design space within the region of interest. It is bounded by
lower and upper bounds of the response values.
Reliability-based design optimization (RBDO). The performing of design optimization while considering
reliability-based failure criteria in the constraints of the design optimization formulation. This implies the
inclusion of random variables in the generation of responses and then extracting the standard deviation of
the responses about their mean values due to the random variance and including the standard deviation in
the constraint(s) calculation.
Residual. The difference between the computed response (using simulation) and the predicted response
(using a response surface).
Response Surface. A mathematical expression which relates the response variables to the design
parameters. Typically computed using statistical methods.
Response. A numerical indicator of the performance of the design. A function of the design variables
approximated using response surface methodology which can be considered for optimization. Symbolized
by f. Collected over all design iterations for plotting. (See also history).
Run directory. The directory in which the simulations are done. Two levels below the Work directory. The
run directory contains status files, the design coordinate file XPoint and all the simulation output.
Saturated design. An experimental design in which the number of points equals the number of unknown
coefficients of the approximation. For a saturated design no test can be made for the lack of fit.
Scale factor. A factor which is specified as a divisor of a response in order to normalize the response.
Sequential Random Search. An iterative method in which the best design is selected from all the
simulation results of each iteration. A Monte Carlo based point selection scheme is typically applied to
generate a set of design points.
Slack constraint. A constraint with a slack variable. The violation of this constraint can be minimized.
Slack variable. The variable which is minimized to find a feasible solution to an optimization problem, e.g.
e in: min e subject to g_j(x) ≤ e; e ≥ 0. See Strictness.
Simulation. The analysis of a physical process or entity in order to compute useful responses. See Function
evaluation.
Solver. A computational tool used to analyze a structure or fluid using a mathematical model. See
Discipline.
Solver directory. A subdirectory of the work directory that bears the name of a solver and where database
files resulting from extraction and the optimization process are stored.
Space Filling Experimental Design. A class of experimental designs that employ an algorithm to
maximize the minimum distance between any two points.
Space Mapping. A technique which uses a fine design model to improve a coarse surrogate model. The
hope is, that if the misalignment between the coarse and fine models is not too large, only a few fine model
simulations will be required to significantly improve the coarse model. The coarse model can be a response
surface.
Strictness. A number between 0 and 1 which signifies the strictness with which a design constraint must be
treated. A zero value implies that the constraint may be violated. If a feasible design is possible all
constraints will be satisfied. Used in the design formulation to minimize constraint violations. See Slack
variable.
Subproblem. The approximate design subproblem constructed using response surfaces. It is solved to find
an approximate optimum.
Successive Approximation Method. An iterative method using the successive solution of approximate
subproblems.
Target. A desired value for a response. The optimizer will not use this value as a rigid constraint. Instead, it
will try to get as close as possible to the specified value.
Template. An input file in which some of the data has been replaced by variable names, e.g.
<<Radius>>.
Transformed variables. Variables which are transformed (mapped) to a different n-space using a
functional relationship. The experimental design and optimization are performed in this space.
Variable screening. Method to remove insignificant variables from the design optimization process based
on a ranking of regression coefficients using analysis of variance (ANOVA). (See also ANOVA).
Weight. A measure of importance of a response function or objective. Typically varies between 0 and 1.
Appendix H

type              values
NORMAL            mu sigma
UNIFORM           lower upper
USER_DEFINED_PDF  filename
USER_DEFINED_CDF  filename
LOGNORMAL         mu sigma
WEIBULL           scale shape

Note: * value = target value for type = targeted, weight for type = weighted