Abstract:
A spectral approach to Bayesian inference is presented. It is based on the idea of computing a series expansion of the likelihood function in terms of polynomials that are orthogonal with respect to the prior. Based on this spectral likelihood expansion, the posterior density and all statistical quantities of interest can be calculated semi-analytically. This formulation avoids Markov chain Monte Carlo simulation and allows one to make use of linear least squares instead. The pros and cons of spectral Bayesian inference are discussed and demonstrated on the basis of simple applications from classical statistics and inverse modeling.
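As an illustration of the idea, the following minimal Python sketch (an editorial addition, not the paper's implementation) expands a Gaussian likelihood in Hermite polynomials, which are orthogonal with respect to a standard normal prior, using ordinary linear least squares; the observation value, noise level, and expansion degree are arbitrary toy choices.

```python
import numpy as np

# Toy spectral Bayesian inference: expand the likelihood in probabilists'
# Hermite polynomials (orthogonal w.r.t. a standard normal prior) via
# linear least squares. The conjugate-Gaussian setup is illustrative.
rng = np.random.default_rng(0)

y, sigma = 0.5, 1.0                      # one observation, known noise std
x = rng.standard_normal(20_000)          # samples from the N(0,1) prior
L = np.exp(-0.5 * ((y - x) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Design matrix of Hermite polynomials He_0..He_8, coefficients a_k by
# ordinary linear least squares -- no MCMC involved.
A = np.polynomial.hermite_e.hermevander(x, 8)
a, *_ = np.linalg.lstsq(A, L, rcond=None)

# By orthogonality w.r.t. the prior: evidence = a_0, and since He_1(x) = x
# with unit norm, the posterior mean is simply a_1 / a_0.
evidence = a[0]
posterior_mean = a[1] / a[0]
```

For this conjugate setup the exact values are evidence = N(y; 0, 1 + sigma^2) ≈ 0.265 and posterior mean y/2 = 0.25; the semi-analytical post-processing of the coefficients recovers both up to sampling error.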

Abstract:
The aim of the present paper is to develop a strategy for solving reliability-based design optimization (RBDO) problems that remains applicable when the performance models are expensive to evaluate. Starting with the premise that simulation-based approaches are not affordable for such problems, and that approaches based on the most probable failure point do not allow the error on the failure probability estimate to be quantified, an approach combining metamodels and advanced simulation techniques is explored. The kriging metamodeling technique is chosen to surrogate the performance functions because it genuinely quantifies the surrogate error. The surrogate error on the limit-state surfaces is propagated to the failure probability estimates in order to provide an empirical error measure. This error is then sequentially reduced by means of a population-based adaptive refinement technique until the kriging surrogates are accurate enough for reliability analysis. This original refinement strategy makes it possible to add several observations to the design of experiments at the same time. Reliability and reliability sensitivity analyses are performed by means of the subset simulation technique for the sake of numerical efficiency. The adaptive surrogate-based strategy for reliability estimation is finally embedded in a classical gradient-based optimization algorithm in order to solve the RBDO problem. The kriging surrogates are built in a so-called augmented reliability space, thus making them reusable from one nested RBDO iteration to the next. The strategy is compared to other approaches available in the literature on three academic examples in the field of structural mechanics.

Abstract:
In the field of structural reliability, the Monte Carlo estimator is considered the reference probability estimator. However, it remains intractable for real engineering cases since it requires a large number of runs of the model. In order to reduce the number of computer experiments, many other approaches, known as reliability methods, have been proposed. One such approach consists in replacing the original model with a surrogate that is much faster to evaluate. Nevertheless, it is often difficult (or even impossible) to quantify the error made by this substitution. In this paper an alternative approach is developed. It takes advantage of kriging meta-modeling and importance sampling techniques. The proposed alternative estimator is finally applied to a finite-element-based structural reliability analysis.

Abstract:
Computer simulation has become the standard tool in many engineering fields for designing and optimizing systems, as well as for assessing their reliability. To cope with demanding analyses such as optimization and reliability assessment, surrogate models (a.k.a. meta-models) have been increasingly investigated in the last decade. Polynomial Chaos Expansions (PCE) and Kriging are two popular non-intrusive meta-modeling techniques. PCE surrogates the computational model with a series of orthonormal polynomials in the input variables, where the polynomials are chosen consistently with the probability distributions of those input variables. Kriging, on the other hand, assumes that the computer model behaves as a realization of a Gaussian random process whose parameters are estimated from the available computer runs, i.e. input vectors and response values. These two techniques have so far been developed more or less in parallel, with little interaction between the researchers in the two fields. In this paper, PC-Kriging is derived as a new non-intrusive meta-modeling approach combining PCE and Kriging. A sparse set of orthonormal polynomials (PCE) approximates the global behavior of the computational model, whereas Kriging captures the local variability of the model output. An adaptive algorithm similar to the least angle regression algorithm determines the optimal sparse set of polynomials. PC-Kriging is validated on various benchmark analytical functions which are easy to sample for reference results. From the numerical investigations it is concluded that PC-Kriging performs at least as well as, and often better than, the two distinct meta-modeling techniques. A larger gain in accuracy is obtained when the experimental design has a limited size, which is an asset when dealing with demanding computational models.
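To fix ideas, here is a minimal numpy sketch (not the authors' code) of the universal-kriging structure that underlies PC-Kriging: a polynomial trend, here a fixed Legendre basis, combined with a Gaussian-kernel process for the residual. The real method selects a sparse polynomial set adaptively (least angle regression) and estimates the kernel parameters by maximum likelihood, both of which are omitted here; degree, length-scale and nugget are hand-picked toy values.

```python
import numpy as np

# Minimal universal kriging: polynomial (Legendre) trend + Gaussian-kernel
# GP residual. PC-Kriging would instead pick a sparse polynomial set and
# tune the kernel parameters; here everything is fixed for illustration.
def fit_predict(x_train, y_train, x_new, degree=3, theta=0.4, nugget=1e-8):
    def trend(x):                         # Legendre basis up to `degree`
        return np.polynomial.legendre.legvander(x, degree)

    def kernel(a, b):
        d = a[:, None] - b[None, :]
        return np.exp(-(d / theta) ** 2)

    K = kernel(x_train, x_train) + nugget * np.eye(len(x_train))
    F = trend(x_train)
    Ki = np.linalg.inv(K)
    # Generalized least squares for the trend coefficients beta
    beta = np.linalg.solve(F.T @ Ki @ F, F.T @ Ki @ y_train)
    resid = Ki @ (y_train - F @ beta)
    return trend(x_new) @ beta + kernel(x_new, x_train) @ resid

x = np.linspace(-1.0, 1.0, 12)
y = x ** 3 + np.sin(3 * x)                # toy model, cheap to sample
x_test = np.array([-0.55, 0.1, 0.7])
y_hat = fit_predict(x, y, x_test)
```

The trend captures the global (polynomial) behavior while the kernel part interpolates the local residuals, which is the division of labor described in the abstract.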

Abstract:
Structural reliability methods aim at computing the probability of failure of systems with respect to some prescribed performance functions. In modern engineering such functions usually involve running an expensive-to-evaluate computational model (e.g. a finite element model). In this respect, simulation methods, which may require $10^{3}$ to $10^{6}$ runs, cannot be used directly. Surrogate models such as quadratic response surfaces, polynomial chaos expansions or kriging (which are built from a limited number of runs of the original model) are then introduced as substitutes for the original model to cope with the computational cost. In practice, however, it is almost impossible to quantify the error made by this substitution. In this paper we propose to use a kriging surrogate of the performance function as a means to build a quasi-optimal importance sampling density. The probability of failure is eventually obtained as the product of an augmented probability, computed by substituting the meta-model for the original performance function, and a correction term which ensures that there is no bias in the estimation even if the meta-model is not fully accurate. The approach is applied to analytical and finite element reliability problems and proves efficient up to 100 random variables.
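The structure of the estimator can be illustrated on a one-dimensional toy problem (an editorial sketch, not the paper's algorithm): the instrumental density is centred on the failure region suggested by a deliberately biased surrogate, and the correction term restores an unbiased estimate of the failure probability.

```python
import numpy as np

# Toy version of P_f = (augmented probability) x (correction term): the
# importance density h is built from an imperfect surrogate g_hat, but the
# correction re-weights with the true g, so the product stays unbiased.
rng = np.random.default_rng(1)

g = lambda x: 3.0 - x          # true performance function, failure if g <= 0
g_hat = lambda x: 3.1 - x      # deliberately biased surrogate of g

N = 20_000
x = 3.0 + rng.standard_normal(N)                 # h = N(3, 1), IS density
w = np.exp(-0.5 * x**2) / np.exp(-0.5 * (x - 3.0)**2)   # weights f(x)/h(x)

p_aug = np.mean((g_hat(x) <= 0) * w)             # surrogate-based probability
corr = np.mean((g(x) <= 0) * w) / np.mean((g_hat(x) <= 0) * w)
p_f = p_aug * corr
```

Algebraically the product collapses to the plain importance sampling estimator with the true indicator, which is why no bias remains even though the surrogate threshold is wrong; the exact value here is Phi(-3) ≈ 1.35e-3.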

Abstract:
Reliability-based design optimization (RBDO) has gained much attention in the past fifteen years as a way of introducing robustness into the process of designing structures and systems in an optimal manner. Indeed, classical optimization (e.g. minimize some cost under mechanical constraints) usually leads to solutions that lie at the boundary of the admissible domain, and that are consequently rather sensitive to uncertainty in the design parameters. In contrast, RBDO aims at designing the system in a robust way by minimizing some cost function under reliability constraints. Thus RBDO methods have to combine optimization algorithms with reliability calculations. The classical approach, known as "double-loop", consists in nesting the computation of the failure probability with respect to the current design within the optimization loop. It is not applicable to industrial models (e.g. finite element models) due to the associated computational burden. In contrast, methods based on approximations of the reliability (e.g. FORM) may not be sufficiently accurate for real-world problems. In this context, an original method has been developed that circumvents the above-mentioned drawbacks of the existing approaches. It is based on the adaptive construction of a meta-model for the expensive-to-evaluate mechanical model, and on the subset simulation technique for the efficient and accurate computation of the failure probability and its sensitivities with respect to the design variables. The proposed methodology is briefly described in this paper before it is applied to the reliability-based design of an imperfect submarine pressure hull.
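For readers unfamiliar with subset simulation, the following bare-bones one-dimensional sketch (an editorial illustration; the level probability, proposal width and sample sizes are arbitrary tuning choices) shows the principle of estimating a small failure probability as a product of larger conditional probabilities.

```python
import numpy as np

# Bare-bones subset simulation for P[g(X) <= 0], X ~ N(0,1). Production
# implementations use component-wise (modified) Metropolis in d dimensions;
# this sketch keeps a single standard normal input for clarity.
def subset_simulation(g, N=2000, p0=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(N)
    prob = 1.0
    for _ in range(20):                      # cap on the number of levels
        gx = g(x)
        thr = np.sort(gx)[int(p0 * N)]       # intermediate threshold
        if thr <= 0:                         # failure level reached
            return prob * np.mean(gx <= 0)
        prob *= p0
        seeds = x[gx <= thr]
        # Grow Markov chains from the seeds, targeting N(0,1) | g <= thr
        chains = [seeds]
        while sum(len(c) for c in chains) < N:
            cur = chains[-1]
            cand = cur + rng.uniform(-1.0, 1.0, len(cur))
            ratio = np.exp(0.5 * (cur**2 - cand**2))   # phi(cand)/phi(cur)
            ok = (rng.uniform(size=len(cur)) < ratio) & (g(cand) <= thr)
            chains.append(np.where(ok, cand, cur))
        x = np.concatenate(chains)[:N]
    return prob

p_f = subset_simulation(lambda x: 3.0 - x)   # exact value: Phi(-3) ~ 1.35e-3
```

Each level targets a conditional probability of about p0, so a rare event is reached with a few thousand model runs instead of the millions crude Monte Carlo would need.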

Abstract:
A meta-model (or surrogate model) is the modern name for what was traditionally called a response surface. It is intended to mimic the behaviour of a computational model M (e.g. a finite element model in mechanics) while being inexpensive to evaluate, in contrast to the original model, which may take hours or even days of computer processing time. In this paper, various types of meta-models that have been used in the last decade in the context of structural reliability are reviewed. More specifically, classical polynomial response surfaces, polynomial chaos expansions and kriging are addressed. It is shown how the need for error estimates and adaptivity in their construction has brought this type of approach to a high level of efficiency. A new technique that solves the problem of potential bias in the estimation of a probability of failure through the use of meta-models is finally presented.

Abstract:
The study makes use of polynomial chaos expansions to compute Sobol' indices within the frame of a global sensitivity analysis of hydro-dispersive parameters in a simplified vertical cross-section of a segment of the subsurface of the Paris Basin. Using conservative ranges, the uncertainty in 78 input variables is propagated to the mean lifetime expectancy of water molecules departing from a specific location within a highly confining layer situated in the middle of the model domain. Lifetime expectancy is a hydrogeological performance measure pertinent to safety analysis with respect to subsurface contaminants, such as radionuclides. The sensitivity analysis indicates that the variability in the mean lifetime expectancy can be sufficiently explained by the uncertainty in the petrofacies, i.e. the sets of porosity and hydraulic conductivity, of only a few layers of the model. The obtained results provide guidance regarding the uncertainty modeling in future investigations employing detailed numerical models of the subsurface of the Paris Basin. Moreover, the study demonstrates the high efficiency of sparse polynomial chaos expansions in computing Sobol' indices for high-dimensional models.
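The key computational shortcut, Sobol' indices obtained analytically from the chaos coefficients, can be sketched on a toy two-dimensional model (an editorial example, unrelated to the hydrogeological model of the study):

```python
import numpy as np
from itertools import product

# Sobol' indices as a by-product of a polynomial chaos expansion: fit a
# tensorized Legendre PCE by least squares, then sum the squared
# (norm-weighted) coefficients attached to each input variable.
rng = np.random.default_rng(2)

f = lambda x1, x2: x1 + x2**2            # analytic Sobol': S1=15/19, S2=4/19
N, deg = 5000, 3
X = rng.uniform(-1.0, 1.0, size=(N, 2))

# Tensor-product Legendre design matrix over multi-indices (a1, a2)
idx = list(product(range(deg + 1), repeat=2))
V1 = np.polynomial.legendre.legvander(X[:, 0], deg)
V2 = np.polynomial.legendre.legvander(X[:, 1], deg)
A = np.stack([V1[:, a1] * V2[:, a2] for a1, a2 in idx], axis=1)

c, *_ = np.linalg.lstsq(A, f(X[:, 0], X[:, 1]), rcond=None)

# E[P_a1^2 * P_a2^2] = 1/((2*a1+1)*(2*a2+1)) for U(-1,1) inputs
norms = np.array([1.0 / ((2*a1 + 1) * (2*a2 + 1)) for a1, a2 in idx])
var_terms = c**2 * norms
total = sum(v for (a1, a2), v in zip(idx, var_terms) if (a1, a2) != (0, 0))
S1 = sum(v for (a1, a2), v in zip(idx, var_terms) if a1 > 0 and a2 == 0) / total
S2 = sum(v for (a1, a2), v in zip(idx, var_terms) if a2 > 0 and a1 == 0) / total
```

Once the expansion is fitted, every Sobol' index is a mere sum of squared coefficients; no further runs of the model are needed, which is what makes the approach attractive in high dimension.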

Abstract:
Meta-models developed with low-rank tensor approximations are investigated for propagating uncertainty through computational models with high-dimensional input. Of interest are meta-models based on polynomial functions, because of the combination of simplicity and versatility they offer. The popular approach of polynomial chaos expansions faces the curse of dimensionality, i.e. the exponential growth of the size of the candidate basis with the input dimension. By exploiting the tensor-product structure of the polynomial basis, low-rank approximations drastically decrease the number of unknown coefficients, which then grows only linearly with the input dimension. The construction of such approximations relies on sequentially updating the polynomial coefficients along separate dimensions, which involves minimization problems of only small size. However, the specification of stopping criteria in the sequential updating of the coefficients and the selection of the optimal rank and polynomial degrees remain open questions. In this paper, we first shed light on the aforementioned issues through extensive numerical investigations. Subsequently, the newly emerged meta-modeling approach is compared with state-of-the-art methods of polynomial chaos expansions. The considered applications involve models of varying dimensionality, i.e. the deflections of two simple engineering structures subjected to static loads and the temperature in stationary heat conduction with spatially varying thermal conductivity. It is found that the comparative accuracy of the two approaches in terms of the generalization error depends on both the application and the size of the experimental design. Nevertheless, low-rank approximations are found to be superior to polynomial chaos expansions in predicting extreme values of model responses when the two types of meta-models exhibit similar generalization errors.
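The sequential updating along separate dimensions can be sketched with a rank-1 canonical approximation in two dimensions, fitted by alternating least squares (an editorial toy example; the rank, degrees and iteration count are fixed by hand, whereas their optimal selection is precisely the open question discussed above):

```python
import numpy as np

# Rank-1 canonical low-rank approximation in 2D: y ~ (B1 @ v1) * (B2 @ v2),
# with a per-dimension Legendre basis, fitted by alternating least squares.
# Each update is a small linear least-squares problem in one dimension.
rng = np.random.default_rng(3)

f = lambda x1, x2: (1.0 + x1) * (2.0 + x2)   # separable, so rank 1 suffices
N, deg = 400, 2
X = rng.uniform(-1.0, 1.0, size=(N, 2))
y = f(X[:, 0], X[:, 1])

B1 = np.polynomial.legendre.legvander(X[:, 0], deg)
B2 = np.polynomial.legendre.legvander(X[:, 1], deg)
v1 = np.ones(deg + 1)
v2 = np.ones(deg + 1)

for _ in range(10):                      # alternate over the two dimensions
    # fix v2 and solve a small least-squares problem for v1, then swap roles
    v1, *_ = np.linalg.lstsq(B1 * (B2 @ v2)[:, None], y, rcond=None)
    v2, *_ = np.linalg.lstsq(B2 * (B1 @ v1)[:, None], y, rcond=None)

y_hat = (B1 @ v1) * (B2 @ v2)
rel_err = np.linalg.norm(y - y_hat) / np.linalg.norm(y)
```

The number of unknowns is 2 x (deg + 1) instead of the (deg + 1)^2 of a full tensorized chaos, and this gap widens to linear versus exponential as the input dimension grows.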

Abstract:
In the field of computer experiments, sensitivity analysis aims at quantifying the relative importance of each input parameter (or combinations thereof) of a computational model with respect to the model output uncertainty. Variance decomposition methods leading to the well-known Sobol' indices are recognized as accurate techniques, albeit at a rather high computational cost. The use of polynomial chaos expansions (PCE) to compute Sobol' indices has greatly alleviated this computational burden. However, when dealing with large-dimensional input vectors, it is good practice to first use screening methods in order to discard unimportant variables. The {\em derivative-based global sensitivity measures} (DGSM) have been developed recently for this purpose. In this paper we show how polynomial chaos expansions may be used to compute DGSMs analytically, as mere post-processing. This requires the analytical derivation of the derivatives of the orthonormal polynomials which enter PC expansions. The efficiency of the approach is illustrated on two well-known benchmark problems in sensitivity analysis.
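The proposed post-processing can be sketched as follows (an editorial toy example in two dimensions): fit a Legendre chaos by least squares, differentiate its coefficient array analytically, and average the squared derivatives over the input distribution.

```python
import numpy as np
from numpy.polynomial import legendre as leg

# DGSM nu_i = E[(df/dx_i)^2] as PCE post-processing: fit a 2-D Legendre
# chaos by least squares, differentiate its coefficient array analytically,
# then average the squared derivatives over fresh samples of the inputs.
rng = np.random.default_rng(4)

f = lambda x1, x2: x1 + x2**2            # exact DGSMs: nu_1 = 1, nu_2 = 4/3
N, deg = 4000, 3
X = rng.uniform(-1.0, 1.0, size=(N, 2))

# Least-squares PCE fit; coefficients stored as a (deg+1) x (deg+1) array
V1 = leg.legvander(X[:, 0], deg)
V2 = leg.legvander(X[:, 1], deg)
A = np.einsum('ni,nj->nij', V1, V2).reshape(N, -1)
c, *_ = np.linalg.lstsq(A, f(X[:, 0], X[:, 1]), rcond=None)
C = c.reshape(deg + 1, deg + 1)

# Analytical derivatives of the chaos (no extra model runs), then averaging
Z = rng.uniform(-1.0, 1.0, size=(100_000, 2))
d1 = leg.legval2d(Z[:, 0], Z[:, 1], leg.legder(C, axis=0))
d2 = leg.legval2d(Z[:, 0], Z[:, 1], leg.legder(C, axis=1))
nu1, nu2 = np.mean(d1**2), np.mean(d2**2)
```

All derivative evaluations act on the cheap polynomial surrogate, so the screening measures come essentially for free once the chaos expansion is available.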