

Event Calendar December 2015


2 entries found

  • 7 December
    11:30 - 13:00
    Room: 32-349

    Dr. Torsten Bosse, Argonne National Laboratory: An asynchronous Oneshot method with load-balancing

    We propose a ‘blurred’ one-shot method for design optimization problems that evaluates the three basic tasks, the state update, the adjoint update, and the control update, in parallel and asynchronously. In particular, each task is allowed to always use and/or override the latest information produced by the other tasks; i.e., rather than waiting until the fixed-point iteration provides a new state update, parts of the corresponding adjoint iteration may already use the latest information from the simulation code. Naturally, this cross-communication between the three tasks leads to inconsistencies, and a mathematical convergence theory for such an approach is far from obvious.

    Nevertheless, one can expect convergence of the method in certain cases. The key to the success of such a method is an optimal distribution of the different tasks for a given amount of available resources on a high-performance cluster. This assignment problem offers a way to influence the contraction rates of the primal and dual updates as well as the change in the control variables. It can also be shown that the blurred one-shot algorithm is a generalization of the previously presented Jacobi- and (multistep-) Seidel one-shot methods, which can be recovered by a suitable allocation of resources. The blurred method can be applied to (discretized) optimal control problems, which may also include unsteady PDEs.
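    As a rough illustration of the one-shot idea (not the speaker's blurred algorithm), the following sketch runs Jacobi-style state, adjoint, and control updates on a hypothetical toy problem. Every update reads only the previous iterate, mimicking three tasks advanced concurrently; all names and parameter values are illustrative assumptions.

    ```python
    # Toy problem (illustrative only):
    #   minimize  J(y, u) = 0.5*(y - 1)**2 + 0.5*gamma*u**2
    #   subject to the state fixed point  y = G(y, u) = 0.5*y + u.
    # One-shot: advance state, adjoint, and control simultaneously instead of
    # fully converging the state equation before each optimization step.

    gamma = 0.1   # control regularization weight (assumed)
    alpha = 0.05  # control step size (assumed)

    y, ybar, u = 0.0, 0.0, 0.0
    for _ in range(2000):
        y_new    = 0.5 * y + u                     # state:   y    <- G(y, u)
        ybar_new = (y - 1.0) + 0.5 * ybar          # adjoint: ybar <- J_y + G_y * ybar
        u_new    = u - alpha * (gamma * u + ybar)  # control: u    <- u - alpha*(J_u + G_u * ybar)
        y, ybar, u = y_new, ybar_new, u_new        # Jacobi: all updates used old iterates

    print(f"y = {y:.5f}, ybar = {ybar:.5f}, u = {u:.5f}")
    ```

    At the coupled fixed point the state equation, the adjoint equation, and the first-order optimality condition hold simultaneously; a Seidel variant would instead feed `y_new` directly into the adjoint update within the same sweep.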

  • 10 December
    11:30 - 13:00
    Room: 32-349

    Prof. D. Quagliarella, Italian Aerospace Research Center: Risk measures and optimization under uncertainty

    Many industrial optimization processes must take into account the stochastic nature of the systems and processes to be designed or re-designed, and must consider the random variability of some of the parameters that describe them. It is therefore necessary to characterize the system under study from various points of view related to the treatment of uncertainty. This talk concerns the use of various risk measures in the context of robust and reliability-based optimization. We start from the definition of a risk measure and its formal setting, and then show how different risk-functional definitions can lead to different approaches to the problem of optimization under uncertainty. In particular, the application of value-at-risk (VaR) and conditional value-at-risk (CVaR), also called quantiles and superquantiles, is illustrated here. These risk measures originated in the area of financial engineering, but they are naturally well suited to reliability-based design optimization problems and represent a possible alternative to more traditional robust design approaches. We then discuss the implementation of an efficient risk-measure-based optimization algorithm based on the introduction of the weighted empirical cumulative distribution function (WECDF) and on the use of methods for changing the probability measure. We will also discuss the problems related to the error in the estimation of the risk functional and illustrate the “bootstrap” computational statistics technique for obtaining an estimate of the standard error of VaR and CVaR. Finally, we will report some simple application examples of this approach to robust and reliability-based optimization.
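    A minimal sketch of the empirical VaR/CVaR estimators and the bootstrap standard-error idea mentioned in the abstract (the loss distribution, sample sizes, and confidence level are illustrative assumptions, not from the talk):

    ```python
    import random
    import statistics

    random.seed(0)
    # Hypothetical loss samples, e.g. from repeated simulation runs; standard
    # normal here so the exact values (VaR_0.95 ~ 1.645, CVaR_0.95 ~ 2.063)
    # are known for comparison.
    losses = [random.gauss(0.0, 1.0) for _ in range(50_000)]

    def var_cvar(samples, alpha=0.95):
        """Empirical value-at-risk (the alpha-quantile of the losses) and
        conditional value-at-risk (mean of the losses at or beyond it)."""
        s = sorted(samples)
        k = int(alpha * len(s))      # index of the empirical alpha-quantile
        tail = s[k:]                 # losses at or beyond VaR
        return s[k], sum(tail) / len(tail)

    var, cvar = var_cvar(losses)

    # Bootstrap standard error: recompute both estimators on B resamples
    # drawn with replacement, then take the spread of the estimates.
    B = 100
    boot = [var_cvar(random.choices(losses, k=len(losses))) for _ in range(B)]
    var_se = statistics.stdev(v for v, _ in boot)
    cvar_se = statistics.stdev(c for _, c in boot)

    print(f"VaR  ~ {var:.3f} +/- {var_se:.3f}")
    print(f"CVaR ~ {cvar:.3f} +/- {cvar_se:.3f}")
    ```

    Note that CVaR averages over the whole tail rather than reading off a single order statistic, which is one reason it behaves more smoothly than VaR as an objective in optimization under uncertainty.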