
Reliability-Based Optimization for Multiple Constraints with Evolutionary Algorithms

©2007 Diploma thesis, 101 pages




David Daum
Reliability-Based Optimization for Multiple Constraints with Evolutionary Algorithms
ISBN: 978-3-8366-1828-1
Printed by Diplomica® Verlag GmbH, Hamburg, 2008
Also: diploma thesis, Universität Fridericiana Karlsruhe (TH), Karlsruhe, Germany,
2007
This work is protected by copyright. The rights thereby established, in particular those
of translation, reprinting, recitation, extraction of figures and tables, broadcasting,
microfilming or reproduction by other means, and storage in data processing systems,
remain reserved, even where only excerpts are used. Reproduction of this work or of
parts of it is, even in individual cases, permitted only within the limits of the statutory
provisions of the Copyright Act of the Federal Republic of Germany in its currently
valid version, and is in principle subject to remuneration. Violations are subject to the
penal provisions of copyright law.
The use of common names, trade names, product designations, etc. in this work does
not, even without special marking, justify the assumption that such names are to be
regarded as free in the sense of trademark legislation and may therefore be used by
anyone.
The information in this work has been compiled with care. Nevertheless, errors cannot
be ruled out completely, and neither the Diplomarbeiten Agentur nor the authors or
translators accept any legal responsibility or liability for incorrect information that may
remain or for its consequences.
© Diplomica Verlag GmbH
http://www.diplom.de, Hamburg 2008
Printed in Germany

Acknowledgements
This thesis is a cooperation between the Institute of Applied Informatics and Formal
Description Methods (AIFB) at the University of Karlsruhe and the Department of Me-
chanical Engineering at the Indian Institute of Technology Kanpur, India. It was mainly
created during my stay at the Indian Institute of Technology Kanpur, which was finan-
cially supported by the German Academic Exchange Service (DAAD) and the Indian
Institute of Technology Kanpur.
I would like to thank my supervisors Dr. Jürgen and Prof. Kalyanmoy Deb, who sup-
ported me throughout my thesis and always had time for extensive discussions. Thanks
go also to Prof. Deb's research group at the Kanpur Genetic Algorithms Laboratory,
namely Dhish, Nikhil, Karthik, Deepak, Kapil, Swanen and all the others, who sup-
ported and helped me with technical and scientific issues. In addition, they all welcomed
me and made my stay there one of a kind. Special thanks go to Dhish Saxena, who
helped me with questions regarding linear algebra and for lots of fruitful discussions
about any scientific topic.

Kurzzusammenfassung
In this work, a method is presented that combines multi-objective evolutionary
algorithms with "reliability-based optimization" and thereby takes uncertainties in
the design variables and parameters into account. Building on the work of the second
author and his research group, this thesis computes the reliability of the obtained
solution more accurately, by including all constraints of an optimization problem in
the reliability analysis, whereas the preceding work considered only the most critical
constraint. First, we give an overview of the field of evolutionary algorithms and of
optimization. Furthermore, we show the connection between multi-objective
optimization methods and "reliability-based optimization". The principle of
"structural reliability" used here is then explained, together with the approach
derived from it for detecting inactive constraints. Finally, we apply our method to a
series of test problems as well as to a real-world problem from the automotive domain.

Abstract
In this work, we combine reliability-based optimization with a multi-objective evolu-
tionary algorithm for handling uncertainty in decision variables and parameters. This
work is an extension of a previous study by the second author and his research group
to more accurately compute a multi-constraint reliability. This means that the overall
reliability of a solution regarding all constraints is examined, instead of a reliability
computation of only one critical constraint. First, we present a brief introduction to
the basics of evolutionary computation and multi-objective optimization. Then, we il-
lustrate the connection between multi-objective optimization and reliability-based opti-
mization, together with the so-called "structural reliability" and its key aspects. Finally,
we introduce a method for identifying inactive constraints according to the reliability
evaluation. With this method, we show that with fewer constraint evaluations, an iden-
tical solution can be achieved. Furthermore, we apply our approach to a number of
problems, including a real-world car side impact design problem, in order to illustrate
our method.

Contents

Kurzzusammenfassung  2
Abstract  3
Contents  4
List of Figures  7
List of Tables  9
1 Introduction  1
2 Related Work  4
3 Optimization  14
3.1 Single Objective Optimization  15
3.2 Multi-Objective Optimization  16
3.2.1 Multi-Objective Optimization Problem  17
3.2.2 Classical Methods  18
3.2.3 Pareto Dominance  19
3.3 Evolutionary Optimization  21
3.3.1 Biological Evolution  22
3.3.2 Genetic and Evolutionary Algorithms  24
3.3.3 Evolutionary Optimization  25
3.3.4 NSGA-II: A Multi-Objective EA  31
4 Reliability-based Design Optimization  34
4.1 Most Probable Point  36
4.1.1 Performance Measure Approach (PMA)  40
4.1.2 Reliability Index Approach (RIA)  41
4.2 Search Algorithm for the MPP  42
4.2.1 A Fast Approximation Based on the RIA  42
4.3 Reliability Analysis  44
4.3.1 Simulation Methods  45
4.3.2 Single-loop Methods  46
4.3.3 Double-loop Methods  47
4.3.4 Decoupled Methods  47
5 Structural Reliability  50
5.1 Foundations  50
5.2 Proposed Active Constraint Approach  52
5.3 Implementation  55
6 Multi-Objective Reliability-Based Optimization  57
6.1 Fixed Reliability with Multiple Objectives  57
6.2 Reliability as an Objective  58
7 Simulation Results  60
7.1 Two-Variable Test Problem  60
7.2 Two-Variable Multi-Modal Test Problem  65
7.3 A Car Side-Impact Problem  67
7.4 Multi-Objective Optimization for a Specified Reliability  72
7.4.1 Two-Objective Car Side-Impact Problem  73
8 Conclusions  77
8.1 Summary  77
8.2 Outlook  79
Bibliography  81

List of Figures

3.1 (a) The ε-Constraint Method (b) The Weighted Sum Method  20
3.2 (a) x1 dominates the rest of the solutions (b) P' is the non-dominated set and P the set of solutions  21
3.3 Scheme for the NSGA-II algorithm.  32
4.1 Due to the fluctuation of x1 and x2, the best solution is inside the feasible area, though the fitness is less compared to the deterministic optimum.  35
4.2 The transformation into U-space  37
4.3 The concept of the MPP  38
4.4 The PMA approach  39
4.5 The RIA approach  39
4.6 The direct connection between the reliability and β  39
4.7 Gradient-based approximate method for solving the RIA problem.  43
4.8 The Double-loop Method  47
4.9 A specific decoupled method (SORA) (Du and Chen 2001). Initial value of s_j is set equal to zero for all j.  49
5.1 (a) FORM procedure with linear approximation at the MPP. The third constraint does not influence the feasible area. (b) The third constraint bounds the feasible area.  53
5.2 Two configurations of inactive constraints: (a) the constraints are parallel and the second one does not contribute to the failure probability, (b) the second constraint is also defined as inactive since the added failure probability is negligible.  55
6.1 Connection between the different reliability levels.  59
7.1 (a) Simulation results for the consideration of one and of all constraints. (b) The difference of the most reliable solution in both populations with the corresponding real reliability.  63
7.2 Comparison between the expected reliability and the real reliability in the single-constraint and all-constraint case  63
7.3 Ratio between the real and the expected failure probability  64
7.4 Comparison of the user's benefit between the two approaches  65
7.5 (a) Contour of the objective landscape of the test problem with four local optima. (b) Simulation results from the multi-modal test problem.  67
7.6 Comparison between the proposed method where only active constraints are evaluated, where all constraints are evaluated, and where only the most critical constraint is evaluated.  70
7.7 Distance from the solution to the MPP.  71
7.8 Impact of the weight on the variables.  72
7.9 Impact of the reliability on the variables.  73
7.10 Two-objective Pareto-optimal fronts for different specified reliabilities.  75
7.11 Only one constraint affects the solution in a significant manner.  76
7.12 Here the variables with β = 2 are plotted against the weight.  76

List of Tables

7.1 Comparison of the number of MPP evaluations.  67
7.2 Correlation matrix with angle  74

1 Introduction
In handling real-world optimization problems, it is often the case that the underlying
decision variables and parameters cannot be controlled exactly as specified. For exam-
ple, if a deterministic consideration of an optimization problem results in an optimal
dimension of a cylindrical member to have a 50 mm diameter, there exists no manu-
facturing process which will guarantee the production of a cylinder having exactly a
50 mm diameter. Every manufacturing process has a finite machine precision and the
dimensions are expected to vary around the specified value. Similarly, the strength of a
material often does not remain fixed for the entire length of the material and is expected
to vary from point to point. When such variations in decision variables and parame-
ters are expected in practice, an obvious question arises: How reliable is the optimized
design against failure when the suggested parameters cannot be adhered to? This ques-
tion is important because in most optimization problems the deterministic optimum lies
at the intersection of a number of constraint boundaries. Thus, if no uncertainties in
parameters and variables are expected, the optimized solution is the best choice, but if
uncertainties are expected, in most occasions, the optimized solution will be found to
be infeasible, violating one or more constraints. These uncertainties, which are either
controllable (e.g. dimensions) or uncontrollable (e.g. material properties), are present
and need to be accounted for in the design process.
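The cylinder example above can be made concrete with a small numerical experiment: when a deterministic optimum sits exactly on a constraint boundary, symmetric manufacturing scatter pushes roughly half of the produced parts into the infeasible region. A minimal sketch (the constraint function and the tolerance are illustrative assumptions, not taken from this thesis):

```python
import random

def g(d):
    # illustrative feasibility constraint: the design is feasible when g(d) >= 0;
    # the deterministic optimum sits exactly on the boundary at d = 50 mm
    return d - 50.0

rng = random.Random(1)
sigma = 0.1          # assumed manufacturing tolerance (mm)
n = 100_000
failures = sum(1 for _ in range(n) if g(rng.gauss(50.0, sigma)) < 0)
print(failures / n)  # close to 0.5: about half of the produced parts are infeasible
```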
Assuming that the variables follow a probability distribution in practice, reliability-based design optimization (RBDO) methods find a reliable solution which is feasible with a pre-specified probability (Ditlevsen and Madsen 1996; Deb, Padmanabhan, Gupta, and Mall 2007).
In most RBDO problems, failure probability and cost are conflicting objectives, which
means that when one is lowered, the other may rise. Therefore, it is important to identify
the uncertain variables which have an impact on the problem and describe them with
different probability distributions based on statistical calculations. Then, the ordinary
deterministic constraint is replaced by a stochastic constraint, which restricts only the
probability of failure of a solution rather than ruling out failure entirely. This can be done
for each constraint individually or for the complete set of constraints, i.e., for the structure as a whole.
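For a single constraint with reliability index β, the stochastic constraint bounds the failure probability by Φ(−β); for the complete structure, the per-constraint failure probabilities must be aggregated. A stdlib-only sketch of the elementary first-order series-system bounds (the β values are illustrative assumptions):

```python
import math

def phi_neg(beta):
    # standard normal tail probability Phi(-beta)
    return 0.5 * math.erfc(beta / math.sqrt(2.0))

betas = [2.0, 3.0]                # assumed per-constraint reliability indices
pf = [phi_neg(b) for b in betas]  # component failure probabilities
pf_lower = max(pf)                # best case: the failure regions fully overlap
pf_upper = sum(pf)                # worst case (union bound): disjoint failure regions
print(pf_lower, pf_upper)
```

The system failure probability of a series system always lies between these two bounds, which is why considering only the most critical constraint can underestimate the true failure probability.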
Different methods for evaluating the reliability of a solution exist. If the cumulative
distribution function (CDF) with its borders is integrable, the reliability can be calculated
analytically and serve directly as an input for the optimization. Unfortunately, most
problems involve complex distributions with complex constraints, which makes it impossible
to calculate the exact value. One straightforward method is to use Monte Carlo
simulation; however, this becomes computationally expensive when the desired reliability
is very high. As engineering technology advances, many real-world design problems
include complex and expensive calculations such as finite element analysis (FEA) or
computational fluid dynamics (CFD) simulations. Since the constraint functions have to be
evaluated for every sample, even a small sample size becomes impractical due to the
computational burden.
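The cost of plain Monte Carlo can be quantified: the relative standard error of an estimated failure probability p_f from n samples is roughly 1/sqrt(n·p_f), so estimating p_f to about 10% relative error needs on the order of 100/p_f constraint evaluations. A quick illustration of this rule of thumb:

```python
# samples needed for ~10% relative error with plain Monte Carlo: n ≈ 100 / p_f
for pf in (1e-2, 1e-4, 1e-6):
    n = 100 / pf
    print(f"p_f = {pf:g}: about {n:.0e} constraint evaluations")
```

For a target failure probability of 10^-6, this already means on the order of 10^8 constraint evaluations, which is clearly infeasible when each evaluation is an FEA or CFD run.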
A more common and faster approach is the evaluation of reliability with first- or second-
order reliability methods (FORM/SORM), which are based on linear and quadratic ap-
proximations of the constraint functions (Madsen, Krenk, and Lind 1986; Rackwitz
2001).
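The idea behind FORM can be illustrated in the standard normal space: for a limit state that is linear in u, the reliability index β is the distance from the origin to the failure plane, and the first-order estimate p_f = Φ(−β) is exact. A sketch with an assumed linear limit state (the coefficients are illustrative):

```python
import math

# assumed linear limit state in standard normal space: g(u) = b - a1*u1 - a2*u2,
# with failure when g(u) < 0
a = (3.0, 4.0)
b = 10.0

beta = b / math.hypot(*a)                     # distance from origin to failure plane
p_f = 0.5 * math.erfc(beta / math.sqrt(2.0))  # Phi(-beta)
print(beta, p_f)  # beta = 2.0, p_f ≈ 0.0228
```

For nonlinear limit states the same formula is applied to the linearization at the MPP, which is where the approximation error of FORM (and the quadratic correction of SORM) comes from.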
In this work, we use an approach based on the first-order approximation (FORM) of the
constraint function. For the reliability analysis, we include all constraints to reach a
high level of accuracy. Furthermore, we propose a method for identifying inactive and
active constraints in terms of reliability, which increases the computational efficiency.
This work is organized as follows: In Chapter 2, we give an overview of the related work
and the basic literature. Chapter 3 provides a short introduction to single and multi-
objective optimization together with the foundations of genetic algorithms. Chapter 4
gives an overview of reliability-based optimization and introduces some basic concepts
for reliability analysis via first- and second-order approximation. Based on the evaluation
of more than one constraint, the concept of structural reliability is introduced in
Chapter 5, including our approach to identifying active and inactive constraints with regard to
reliability. Chapter 6 reports on the combination of reliability-based optimization
and multi-objective optimization. In Chapter 7, we show the results of our test cases
and also of a real-world engineering problem. The summary and an outlook on future
research are given in Chapter 8.

2 Related Work
Deterministic design optimization does not consider the uncertainties present in the
manufacturing process, the design simulation, or the design variables. The resulting
deterministic optimal solution usually has a high probability of failure, i.e., it is
unreliable and of no practical use. This means that if uncertainties in design optimization
are present, they have to be taken into account during the optimization process.
Possible sources of uncertainties are the fitness function and the design variables and/or
the external parameters. A fitness function can be noisy, may change with time or only
an approximation may be available, and the variables may fluctuate following some
distribution. The probabilistic design methods can be separated into two groups:
reliability-based design and robust design. Reliability-based design (e.g., Deb,
Padmanabhan, Gupta, and Mall 2007) has the goal of finding the best solution that satisfies
the constraints with a specified probability, while robust design (e.g., Dunsmore, Pitts,
Lewis, Sexton, Please, and Carden 1997; Du 2001) is usually interested in optimizing
the expected mean performance of the solution. For all the different cases of uncertainty,
additional measures have to be taken to allow the Evolutionary Algorithm (EA)
or any other optimizer to generate adequate solutions. A review of optimization in
uncertain environments can be found in Jin and Branke (2005).
The most challenging issue for implementing probabilistic design methods is associated
with the uncertainty analysis. For that reason, the focus of research in this area was
mainly on the evaluation and prediction of the uncertainty. In recent years, there has been a
growing interest in combining Evolutionary Algorithms with both kinds of uncertainty,
robust- and reliability-based optimization e. g. (Forouraghi 2000; Coit and Smith 1996;
Ramirez-Rosado and Bernal-Agustin 2001; Deb and Gupta 2006; Deb, Padmanabhan,
Gupta, and Mall 2007).
First, we give a short overview of the existing literature and also a short introduction
into robust optimization. Then we will more closely examine work connected with
reliability.
While reliability usually is defined as the probability with which the solution is feasible,
there exist many different definitions of robustness, including good expected perfor-
mance, a good worst-case performance, low variability in performance or a large range
of disturbances still leading to good solutions (Branke 2002). What kind of robustness
is adequate depends on the problem and kind of application. However, most of the
publications in robust evolutionary optimization deal with the expected performance as
the measure of robustness, whereby the problem is reduced to a single-objective problem
for the optimizer. The uncertainty analysis is thus a key factor for the optimization,
which is also reflected by the numerous approaches that have already been published.
These include, amongst others:
· Averaging over multiple samples: These
Monte Carlo (MC) procedures are a straightforward method which can be applied
to the robustness analysis. By taking samples around the solution, an estimate of
the average fitness of the solution can be obtained (Greiner 1996; Thompson
1998; Sebald and Fogel 1992; McIlhagga, Husbands, and Ives 1996). Although
these methods are used frequently, the main drawback is that they need high num-
bers of samples to get good estimates, which is not feasible for some practical
problems due to the computational cost.
· Variance reduction techniques: One way of reducing the necessary sample size
is to use derandomized sampling procedures, such as Latin Hypercube Sampling
(LHS) (Branke 2001), which reduce the variance of the estimator and allow a
more precise estimate with fewer samples. In Lagaros, Plevris, and Papadrakakis
(2005), the authors used an evolutionary algorithm along with a Monte Carlo
simulation based on Latin Hypercube Sampling to achieve robust solutions for the
structure of a transmission tower. The authors also compared MC sampling to
LHS and found that LHS requires significantly fewer samples than MC. In Lagaros
and Papadopoulos (2006), a robust solution for a shell structure is sought with
an evolutionary algorithm.
· Using the information in the population: Jin and Sendhoff (2003) combined
robust design optimization with evolutionary multi-objective optimization and
used performance and a measure for robustness as two objectives. This makes
it possible to identify the Pareto front and the trade-off between performance
and robustness. In order to measure the robustness, they exploit the information
which is already available in the current population of the evolutionary algorithm.
This has the advantage that no additional function evaluations are necessary.
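The sampling-based robustness measures above reduce, in their simplest form, to estimating an "effective" fitness by averaging the raw fitness over disturbed copies of a design. A minimal sketch (the objective function and the noise level are illustrative assumptions):

```python
import random

def fitness(x):
    # hypothetical objective with a narrow, non-robust peak at x = 0
    return 1.0 if abs(x) < 0.05 else 0.0

def effective_fitness(x, sigma=0.1, n=2000, seed=0):
    # Monte Carlo estimate of the expected fitness under Gaussian disturbances
    rng = random.Random(seed)
    return sum(fitness(x + rng.gauss(0.0, sigma)) for _ in range(n)) / n

# the sharp peak loses most of its value once the disturbance is accounted for
print(effective_fitness(0.0))
```

An optimizer maximizing `effective_fitness` instead of `fitness` would therefore prefer broad, flat optima over sharp peaks, which is exactly the behavior the expected-performance robustness measure is designed to produce.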
There exist also approaches without sampling which are usually faster, especially if
the evaluation of the objective function is expensive. In Lim, Ong, and Lee (2005),
an inverse robust design optimization is proposed which uses the worst case as a measure
for robustness. The approach does not assume any structure of the uncertain environment;
instead, the designer has to define the worst case tolerable for the solution. This makes
it especially useful if no information about the uncertainties is available.
Li, Azarm, and Aute (2005) combine a multi-objective EA with robustness by simul-
taneously optimizing a measure for the solutions' performance and a measure for the
solutions' robustness, thereby identifying the tradeoff between fitness and robustness.
Stoebner and Mahadevan (2000) studied the effect of fluctuations in the design vari-
ables on the robustness and the reliability of a system performance. They developed a
method which combines robustness and reliability in one optimization and proposed a
method to achieve both reliability and robustness in design. In the proposed method, the
decision-maker can choose weights for the objectives to determine the optimal design.
As already mentioned, reliability is usually defined as the probability of a solution be-
ing feasible. The most obvious difference between the two kinds of uncertainties is
that for the evaluation of robustness, the objective function is most important, while
the reliability only depends on the constraints. Since most probability distributions are
non-linear or even unknown, the reliability cannot be calculated straightforwardly. As
described in the introduction, there are different approaches for obtaining approximations
of the reliability. There are also various ways of combining the reliability analysis
with the optimizer. The conventional approach is to perform a double loop optimization
in which the inner loop is employed for the reliability analysis and the outer loop opti-
mizes the original objective function. Recently, there has been increasing interest in
single-loop and decoupled techniques, since these approaches are faster than the
conventional double-loop method (Yang, Chuang, Gu, and Li 2005). A deeper investigation of those
methods is given in chapter 4.3.
The aforementioned approaches can be based on various techniques. In previous pub-
lications, first-order reliability approximation, second-order reliability approximation,
and Monte Carlo techniques are used most frequently. Some of those concepts require
the finding of the most probable point (MPP), which is the point on the constraint sur-
face closest to the solution.
· Averaging over multiple samples: The failure probability can be calculated with
Monte Carlo sampling techniques which create a number of samples following
the uncertainties of the design variables and evaluate them according to the con-
straint function as feasible or infeasible e.g. (Cruse 1997; Braun and Kroo 1996).
· Variance reduction techniques: As for robustness, derandomized sampling pro-
cedures have also been employed for reliability. Loughlin and Ranjithan (1999)
used Latin Hypercube Sampling along with an evolutionary algorithm and showed
that with the same sample size LHS performed better in terms of accuracy than
the traditional Monte Carlo approach.
· Sample reduction techniques: An advancement in reducing the number of sam-
ples is importance sampling, which employs sampling around the MPP (Harbitz
1986). Another method for reduction of the sample size is directional sampling
introduced by Ditlevsen and Bjerager (1989), Bjerager (1988). Here the samples
are chosen on lines with a star shape starting in the origin of the standard normal
U-Space. In the direction of infeasible solutions, the angle between the lines is
narrowed until a defined level of accuracy is reached. The method of Fast Proba-
bility Integration (Wu 1993) is based on an importance sampling method that can
be used to compute reliability and reliability sensitivities. It starts from an initial
approximate failure domain and proceeds adaptively and incrementally with the
goal of reaching a sampling domain that is slightly greater than the failure domain
to minimize over-sampling in the safe region.
Wang, Wang, and Shan (2005) proposed a combination of sampling and meta-
modeling. Their approach applied a discriminative sampling strategy, which gen-
erates more points close to the constraint function. Then, in the neighborhood
of the constraint function, a kriging model is built and the reliability analysis is
performed based on this model.
· First and second-order approximations: Hasofer and Lind (1974) introduced
a procedure based on first-order approximation (FORM) of the constraints at the
most probable point (MPP). By using this simplification, the reliability integral
can be calculated. This system was extended by using a second-order approx-
imation which can be more exact for some problems, especially if nonlinear
constraints are involved (Fiessler, Neumann, and Rackwitz 1979; Jackson 1982;
Madsen 1985). Also higher order approximations (Ramachandran and Baker
1985; Hohenbichler and Rackwitz 1983) have been used for the constraints. But
the higher numerical effort results in little gain in accuracy (Rackwitz 2001). Wu
and Wirsching (1987) showed through a number of realistic physical test problems
that the errors in P_F with a second-order approximation are consistently less
than 10%.
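Latin Hypercube Sampling, used in several of the studies above, can be sketched without external libraries: each dimension is divided into n equal strata, one sample is drawn per stratum, and the strata are paired randomly across dimensions. A minimal stdlib-only sketch:

```python
import random

def latin_hypercube(n, dims, seed=0):
    # one point per stratum [k/n, (k+1)/n) in every dimension,
    # with the strata randomly paired across dimensions
    rng = random.Random(seed)
    samples = [[0.0] * dims for _ in range(n)]
    for d in range(dims):
        strata = list(range(n))
        rng.shuffle(strata)
        for i in range(n):
            samples[i][d] = (strata[i] + rng.random()) / n
    return samples

pts = latin_hypercube(10, 2)
# every dimension hits each of the 10 strata exactly once
```

Because every marginal stratum is sampled exactly once, the estimator variance is lower than with plain Monte Carlo for the same sample size, which is the effect reported in the LHS comparisons cited above.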
As mentioned, the concept of the most probable point is applied in some approaches.
The search for this point is mostly done in the standard normal space, also called
U-space. The mapping from the variable space into the U-space can be performed using
various transformations, for example the Rosenblatt transformation (Rosenblatt 1952).
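For an independent random variable with known CDF F_X, this mapping reduces to u = Φ⁻¹(F_X(x)); the full Rosenblatt transformation chains such steps through conditional distributions. A stdlib-only sketch for an exponential variable (the distribution choice is an illustrative assumption):

```python
import math

def std_normal_ppf(p):
    # inverse standard normal CDF by bisection (a stdlib-only stand-in for a
    # library routine; assumes 0 < p < 1)
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if 0.5 * math.erfc(-mid / math.sqrt(2.0)) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def to_u_space(x, lam=1.0):
    # u = Phi^{-1}(F_X(x)) for an exponential variable, F(x) = 1 - exp(-lam*x)
    return std_normal_ppf(1.0 - math.exp(-lam * x))

# the median of the exponential distribution maps to u = 0
print(abs(to_u_space(math.log(2.0))) < 1e-9)  # True
```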
For finding the MPP, different approaches exist. Among others, the performance mea-
sure approach (PMA) and the reliability index approach (RIA), which both include an
optimization problem, are frequently used in recent publications. In chapter 4.1, we
provide detailed information about RIA and PMA. A comparison of these two is given
by Tu, Choi, and Park (1999), which shows that PMA is inherently robust and more
efficient in evaluating inactive probabilistic constraints while RIA is more efficient for
violated probabilistic constraints. Another comparison (Youn and Choi 2004a) shows
that with nonlinearity in the constraints RIA becomes much more difficult to solve for
non-normally distributed random parameters because of the highly nonlinear transfor-
mations that are involved. On the other hand, PMA is fairly independent of probability
distributions because it only checks whether the constraint is inside a predefined circle
around the solution or outside.
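The RIA search for the MPP, i.e., minimizing ||u|| subject to g(u) = 0, is classically carried out with the Hasofer-Lind-Rackwitz-Fiessler (HL-RF) iteration. A sketch for a differentiable limit state; the linear limit state used for the check is an illustrative assumption (for a linear g the iteration converges in a single step):

```python
import math

def hlrf(g, grad_g, u0, iters=50):
    # HL-RF iteration: repeatedly project onto the linearized limit state
    # g(u) + grad_g(u) . (u_new - u) = 0 while moving toward the U-space origin
    u = list(u0)
    for _ in range(iters):
        gu = g(u)
        dg = grad_g(u)
        norm2 = sum(d * d for d in dg)
        k = (sum(d * ui for d, ui in zip(dg, u)) - gu) / norm2
        u = [k * d for d in dg]
    return u

# illustrative linear limit state g(u) = 10 - 3*u1 - 4*u2 (exact MPP at beta = 2)
g = lambda u: 10.0 - 3.0 * u[0] - 4.0 * u[1]
grad_g = lambda u: [-3.0, -4.0]

u_star = hlrf(g, grad_g, [0.0, 0.0])
beta = math.hypot(*u_star)
print(round(beta, 9))  # 2.0
```

For strongly nonlinear limit states this simple iteration can oscillate or converge to a local MPP, which motivates the more robust PMA formulations discussed above.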
A further method is the approximate moment approach (AMA) (Youn and Choi 2004b)
which does not require information about the probability distributions of the design
variables, but requires derivative information of performance constraints with respect
to design variables and their mean value. This can be of interest if the probability
distributions of the design variables are unknown and Monte Carlo sampling is not
feasible due to the complexity of the problem.
The concept of the MPP is the basis of most approximation methods; nevertheless, it
has disadvantages regarding computational efficiency. In order to reduce the
computational burden, approximation schemes for the MPP have been introduced (Du and
Chen 2001), which have the disadvantage that they may converge to a local optimum,
especially if non-linear constraints are involved.
The following two methods do not need to calculate the MPP and thereby avoid these
drawbacks. Tvedt (1990) presented a numerical method which evaluates
the parabolic failure domain exactly by inversion of the characteristic function of the
parabolic quadratic form. In Der Kiureghian, Lin, and Hwang (1987), a technique is
proposed which computes the probability of failure by fitting a parabolic approximation to
the limit-state surface.
For most approximation methods, it is assumed that all variables have been transformed
into independent standard normal ones; only then can the methods be applied. Breitung
(1991) proposed an approach which does not need such transformations. In this approach
it is sufficient to maximize the log-likelihood function of the probability distribution in
the original space, and then to approximate this function and the limit-state function near
the maximum points by second-order Taylor expansions to obtain asymptotic
approximations. The advantage is that no complex transformation is needed and the
results can be interpreted naturally, since the random variables and the constraint
functions remain in the original variable space.
An approach that has drawn increasing interest in recent years is the use of single-loop
methods, which avoid the nested loops by converting the reliability analysis into a
deterministic problem.
Du and Chen (2002) proposed a method in which optimization and reliability assessment
are decoupled from each other: no reliability assessment is required within the
optimization; it is conducted only afterwards. The key
concept of the proposed method is to shift the boundaries of violated deterministic
constraints in the feasible direction based on the reliability information obtained in the
previous cycle. In Agarwal (2004) the inner-loop optimization of the reliability analysis
is replaced by its corresponding first-order Karush-Kuhn-Tucker necessary optimality
conditions. The author shows that this approach is more efficient than existing
approaches and is also capable of finding solutions close to the optimum. Another method
for reducing the computational burden is proposed in Gea and Oza (2006). Here a
two-level approximation method is used in which, at the first level, a reduced
second-order approximation is used for the optimization of the solution and, at the
second level, a linear approximation is applied for the reliability assessment. The
optimal solution is obtained iteratively.
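The boundary-shifting idea behind the decoupled scheme can be illustrated with a hedged sketch. The shift rule, the one-dimensional toy constraint, and the Monte Carlo reliability check below are illustrative simplifications, not the exact formulation of the cited method:

```python
import numpy as np

rng = np.random.default_rng(0)

def reliability(x, n=200_000):
    """Estimate P[g(x + noise) >= 0] for g(x) = x - 1 by Monte Carlo
    with normally distributed uncertainty (illustrative only)."""
    noise = rng.normal(0.0, 0.2, size=n)
    return np.mean(x + noise - 1.0 >= 0.0)

def decoupled_rbdo(target=0.99, cycles=20):
    """Sequential cycles: (1) a deterministic 'optimization' minimizing x
    subject to the shifted constraint x - 1 - shift >= 0, solvable in
    closed form here; (2) a reliability assessment after the optimization;
    (3) a shift of the constraint boundary in the feasible direction."""
    shift = 0.0
    for _ in range(cycles):
        x = 1.0 + shift            # deterministic optimum sits on the shifted boundary
        r = reliability(x)
        if r >= target:
            return x, r            # required reliability reached
        shift += 0.05              # push the boundary further into the feasible region
    return x, r

x_opt, rel = decoupled_rbdo()
```

Each cycle is purely deterministic; the reliability information only enters between cycles through the shift, which is what removes the nested loop.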
These methods do not find the real MPP but approximate it, which improves the
computational efficiency significantly. However, there is no guarantee that the
approximations converge to the true reliability, so the optimal solution may not fulfill
the required reliability.
Another method, which uses a two-point adaptive nonlinear approximation of the
constraint function, is proposed by Grandhi and Wang (1998). They construct the
approximation using the function values and the first-order gradients at two points of the
limit-state function.
The application of reliability-based design methods to automotive structure design is
not new and has already been studied for several years (Yang, Tseng, Nagy, and Cheng
1994; Schramm, Schneider, and Thomas 1999). Such large-scale optimization problems
sometimes required more than a month of computation time, which in recent years has
been reduced to days by the efficient use of shared-memory multiprocessor systems
(Sobieski, Kodiyalam, and Yang 2000; Kodiyalam and Tho 2001). An overview
of the RBDO methods for automotive structures is given by Gu and Yang (2006), who
also include sampling techniques, non-linear response surface methodologies, robust
Details

Pages: 101
Edition: Original edition
Year: 2007
ISBN (eBook): 9783836618281
File size: 1.1 MB
Language: English
Institution: Karlsruher Institut für Technologie (KIT) – Fakultät für Wirtschaftswissenschaften, Wirtschaftsingenieurwesen
Publication date: April 2014
Grade: 1.0
Keywords: evolutionary algorithms, genetic optimization, multi, reliability