Article Information
- Jun HU, Shudao ZHANG
- Global sensitivity analysis based on high-dimensional sparse surrogate construction
- Applied Mathematics and Mechanics (English Edition), 2017, 38(6): 797-814.
- http://dx.doi.org/10.1007/s10483-017-2208-8
Article History
- Received Apr. 20, 2016
- Revised Sep. 25, 2016
Sensitivity analysis (SA) investigates the relative contribution of the uncertainty of the model inputs to the uncertainty in the model outputs. Thus, it is an important tool in model calibration, verification, and simplification. The SA is usually classified into local sensitivity analysis (LSA) and global sensitivity analysis (GSA). The LSA only considers uncertainty stemming from input variations around a specific point, while the GSA considers variations of the inputs within their entire feasibility space. Obviously, the GSA plays a more important role in model simplification, importance ranking, and risk reduction. Moreover, Saltelli et al.[1] (2008) defined three specific purposes (settings) of GSA. The first setting, described as the factor prioritization (FP), aims at ranking the inputs in terms of their relative contribution to the output uncertainty. The second setting, used for model simplification and described as the factor fixing (FF), or screening, aims at determining the inputs that give little contribution to the output uncertainty. The third setting, named the factor mapping (FM), aims at determining the regions of the input space that produce specific output values, for instance, above a prescribed threshold.
The effective realization of the FP and FF settings needs reliable definitions of sensitivity indices (or importance measures). Saltelli[2] pointed out that good sensitivity indices should be global, quantitative, and model free. So far, many GSA indices have been defined, which are mainly classified into two categories:
(ⅰ) Variance-based SA (the Sobol method), namely, to compute the Sobol indices[3-4] by decomposing the variance of the output into the contributions of individual inputs or their combinations;
(ⅱ) The moment-independent (or density-based) sensitivity method, namely, to compute the moment-independent (or delta) indices[5] by calculating the shift in the probability density function (PDF) of the model output when an input is fixed.
To estimate the Sobol indices for the variance-based SA, Monte Carlo sampling based methods were developed by Sobol[3] for the first-order and interaction indices and by Saltelli[6] for the first-order and total indices. Unfortunately, to get precise estimates of the sensitivity indices, these methods are costly in terms of the number of model calls (the rate of convergence of Monte Carlo estimates is only O(N^{-1/2})). The Monte Carlo estimation of the delta indices is similarly expensive[7].
The computation of the Sobol or delta indices by the Monte Carlo method requires a large number of model evaluations. Meta-models (or surrogate models) such as Gaussian processes[8] or polynomial chaos (PC) expansions[9] are good alternatives for the estimation of sensitivity indices, because they are simpler to evaluate than the original model. However, the use of meta-models usually produces both a model approximation error and a Monte Carlo sampling error, which can greatly reduce the accuracy of the sensitivity indices. Fortunately, PC methods[10], prevailing in the field of uncertainty quantification[11], can obtain the Sobol indices directly from the coefficients of the PC expansions[9]. With the increase in the input dimension or the order degree, PC methods face the curse of dimensionality and may even need more sample points than the Monte Carlo method. To alleviate this high-dimensional problem, sparse PC methods were first proposed by Blatman and Sudret[12-14] and then adopted for the computation of global Sobol sensitivity indices. Their sparse reconstruction algorithms mainly come from the variable selection techniques in statistics, including the stepwise regression and the least angle regression.
In fact, there still exist a great number of alternative sparse reconstruction algorithms which can be used for the construction of sparse PC surrogate models and then the GSA. In this paper, three standard sparse reconstruction techniques (the orthogonal matching pursuit (OMP), the spectral projected gradient for L1 minimization (SPGL1), and the Bayesian compressive sensing (BCS) with Laplace priors) are investigated and compared for the computation of Sobol sensitivity indices through sparse PC expansions for three benchmark problems. In Section 2, the theoretical formulations of the GSA based on sparse surrogate construction are described. Subsection 2.1 gives the definition of the Sobol global sensitivity indices, Subsections 2.2 and 2.3 briefly describe the PC method and the sparse PC method, respectively, and Subsection 2.4 presents the three sparse reconstruction algorithms mentioned above. In Section 3, numerical examples based on three benchmark response models are given to validate and compare the three sparse reconstruction algorithms for the effective implementation of the variance-based GSA through sparse PC methods. Finally, in Section 4, a conclusion summarizes the main contents of this paper.
2 Theoretical formulations
2.1 Sobol global sensitivity indices
Any model may be viewed as a function Y = f(X), where X is a vector of d uncertain model inputs X1, X2, …, Xd, which are assumed to be independently and uniformly distributed within the unit hypercube [-1, 1]^d. f(X) may be decomposed in the following way (Sobol[3], 1993):
$$f(X) = f_0 + \sum_{i=1}^{d} f_i(X_i) + \sum_{1\le i<j\le d} f_{ij}(X_i, X_j) + \cdots + f_{12\cdots d}(X_1, X_2, \ldots, X_d),\tag{1}$$
which is called the Sobol (or analysis of variance) decomposition and is subject to the following condition:
$$\int_{-1}^{1} f_{i_1\cdots i_s}(X_{i_1}, \ldots, X_{i_s})\,\mathrm{d}X_{i_k} = 0,\qquad 1\le k\le s.\tag{2}$$
This finally leads to the decomposition of variance expression,
$$\operatorname{Var}(Y) = \sum_{i=1}^{d} D_i + \sum_{1\le i<j\le d} D_{ij} + \cdots + D_{12\cdots d},\qquad D_{i_1\cdots i_s} = \operatorname{Var}\big(f_{i_1\cdots i_s}(X_{i_1}, \ldots, X_{i_s})\big),\tag{3}$$
which can be easily used to define Sobol global sensitivity indices as follows:
(ⅰ) The main effect index Si is defined by[3]
$$S_i = \frac{D_i}{\operatorname{Var}(Y)} = \frac{\operatorname{Var}_{X_i}\big(E_{X_{\sim i}}(Y\,|\,X_i)\big)}{\operatorname{Var}(Y)}.\tag{4}$$
Here, E_{X∼i} denotes the expectation of Y with respect to all uncertain inputs but Xi. Higher-order interaction indices Sij, Sijk, and so on can be formed by dividing the other terms of the variance decomposition (3) by Var(Y). Note that this has the implication that
$$\sum_{i=1}^{d} S_i + \sum_{1\le i<j\le d} S_{ij} + \cdots + S_{12\cdots d} = 1.\tag{5}$$
(ⅱ) The total effect index STi can be written as[4]
$$S_{Ti} = 1 - \frac{\operatorname{Var}_{X_{\sim i}}\big(E_{X_i}(Y\,|\,X_{\sim i})\big)}{\operatorname{Var}(Y)} = \frac{E_{X_{\sim i}}\big(\operatorname{Var}_{X_i}(Y\,|\,X_{\sim i})\big)}{\operatorname{Var}(Y)}.\tag{6}$$
Obviously, the Sobol indices can be computed by the Monte Carlo method, which is dimension independent but converges slowly. Concrete Monte Carlo estimators are given by
$$S_i \approx \frac{1}{N\operatorname{Var}(Y)}\sum_{j=1}^{N} f(B)_j\big(f(A_B^i)_j - f(A)_j\big),\tag{7}$$
$$S_{Ti} \approx \frac{1}{2N\operatorname{Var}(Y)}\sum_{j=1}^{N}\big(f(A)_j - f(A_B^i)_j\big)^2,\tag{8}$$
where A and B are two (N × d) random matrices sampled from the uniform distribution, the matrix A_B^i is produced by replacing the ith column of the matrix A with the ith column of the matrix B, and f(A)_j denotes the function response of the jth row of A.
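To make the estimators (7) and (8) concrete, the following Python sketch implements the pick-freeze sampling scheme; the toy model, sample size, and random seed are illustrative assumptions, not settings from this paper.

```python
import numpy as np

def sobol_mc(f, d, N, rng):
    """Estimate S_i and S_Ti with the pick-freeze estimators (7) and (8)."""
    A = rng.uniform(-1.0, 1.0, size=(N, d))       # two independent (N x d) matrices
    B = rng.uniform(-1.0, 1.0, size=(N, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S, ST = np.empty(d), np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                       # replace the i-th column of A by B's
        fABi = f(ABi)
        S[i] = np.mean(fB * (fABi - fA)) / var          # estimator (7)
        ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var   # estimator (8)
    return S, ST

f = lambda X: X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * X[:, 0] * X[:, 2]   # toy model
S, ST = sobol_mc(f, d=3, N=100_000, rng=np.random.default_rng(0))
print("main effects :", S.round(3))
print("total effects:", ST.round(3))
```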
2.2 PC method
The PC method is not only an efficient approach for uncertainty quantification[10-11] but can also be used to perform the variance-based GSA with respect to the Sobol indices[9]. The theoretical basis of the PC method is the finding of Cameron and Martin[15] that a PC expansion converges to any L2 functional in the L2 sense. First, any model output Y = f(X) (assumed to be an L2 functional) can be expanded into the generalized PC expansion in terms of the uniform random inputs X as follows:
$$Y = f(X) = \sum_{\alpha\in\Lambda} c_\alpha \psi_\alpha(X),\qquad \psi_\alpha(X) = \prod_{i=1}^{d}\pi_{\alpha_i}(X_i).\tag{9}$$
Here, α = (α1, …, αd) is a multi-index, and π_{α_i}(X_i) are the normalized Legendre polynomials associated with the uniform PDF. The mean and variance of the model output can be computed directly from the PC expansion coefficients,
$$E(Y) = c_{\mathbf{0}},\qquad \operatorname{Var}(Y) = \sum_{\alpha\in\Lambda,\,\alpha\neq\mathbf{0}} c_\alpha^2,\tag{10}$$
which are the representative quantities of uncertainty quantification. For the variance-based GSA, the PC expansion can be written as the Sobol (or the analysis of variance) decomposition with
$$f_{i_1\cdots i_s}(X_{i_1}, \ldots, X_{i_s}) = \sum_{\alpha\in I_{i_1,\ldots,i_s}} c_\alpha \psi_\alpha(X),\tag{11}$$
where the set I_{i_1,…,i_s} is defined as
$$I_{i_1,\ldots,i_s} = \big\{\alpha\in\Lambda:\ \alpha_k > 0 \ \text{for}\ k\in\{i_1,\ldots,i_s\},\ \ \alpha_k = 0 \ \text{for}\ k\notin\{i_1,\ldots,i_s\}\big\}.\tag{12}$$
Here, Λ denotes a multi-index set, i.e., a collection of PC basis functions. Then, the variance-based sensitivity indices (Sobol indices) can also be obtained from the PC expansion coefficients,
$$S_i = \frac{\sum_{\alpha\in I_i} c_\alpha^2}{\sum_{\alpha\neq\mathbf{0}} c_\alpha^2},\tag{13}$$
$$S_{Ti} = \frac{\sum_{\alpha:\,\alpha_i>0} c_\alpha^2}{\sum_{\alpha\neq\mathbf{0}} c_\alpha^2}.\tag{14}$$
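The following minimal Python sketch illustrates (13) and (14): once a multi-index set and the corresponding PC coefficients are available, the Sobol indices are simple sums of squared coefficients. The orthonormal-basis convention of (10) is assumed, and the toy multi-index set and coefficients are hypothetical.

```python
import numpy as np

def pc_sobol_indices(alphas, coeffs):
    """alphas: (P, d) integer multi-indices; coeffs: (P,) PC coefficients."""
    alphas = np.asarray(alphas)
    coeffs = np.asarray(coeffs, dtype=float)
    nonzero = alphas.any(axis=1)                    # exclude the constant term
    total_var = np.sum(coeffs[nonzero] ** 2)        # Var(Y) from (10)
    d = alphas.shape[1]
    S, ST = np.empty(d), np.empty(d)
    for i in range(d):
        # multi-indices in I_i: alpha_i > 0 and all other components zero
        only_i = (alphas[:, i] > 0) & (alphas.sum(axis=1) == alphas[:, i])
        S[i] = np.sum(coeffs[only_i] ** 2) / total_var           # eq. (13)
        ST[i] = np.sum(coeffs[alphas[:, i] > 0] ** 2) / total_var  # eq. (14)
    return S, ST

# Toy usage: Y = c0 + c1*pi_1(X1) + c2*pi_1(X2) + c3*pi_1(X1)pi_1(X2)
alphas = [[0, 0], [1, 0], [0, 1], [1, 1]]
coeffs = [2.0, 0.8, 0.4, 0.2]
print(pc_sobol_indices(alphas, coeffs))
```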
As mentioned by Sudret[9], the computation for expansion coefficients can be realized through two kinds of non-intrusive approaches as follows:
(ⅰ) The projection approach (multidimensional integral problem). Based on the orthogonality of the PC basis, the expansion coefficients can be computed through
$$c_\alpha = E\big(f(X)\psi_\alpha(X)\big) = \int_{\Xi^d} f(X)\,\psi_\alpha(X)\,\rho(X)\,\mathrm{d}X.\tag{15}$$
Here, Ξ^d denotes the d-dimensional unit hypercube [-1, 1]^d for the multidimensional integral region, ρ(X) = 2^{-d} is the uniform PDF, and dX stands for dX1⋯dXd. The multidimensional integral can be computed by the crude Monte Carlo simulation[16], the Latin hypercube sampling[17], and quasi-Monte Carlo (QMC) sequences[18]. However, to achieve sufficient accuracy with these integration schemes, the number of samples must be very large. When the response Y is obtained from a computationally demanding finite element model, this approach is practically inapplicable. An alternative for the multidimensional integral is the Gaussian quadrature scheme. Specifically, for the tensor product quadrature, the expansion coefficients can be calculated as a weighted summation of the integrands evaluated at the tensorial integration points, i.e.,
$$c_\alpha \approx \sum_{i_1=1}^{K}\cdots\sum_{i_d=1}^{K} f(\xi_{i_1},\ldots,\xi_{i_d})\,\psi_\alpha(\xi_{i_1},\ldots,\xi_{i_d})\,\omega_{i_1}\cdots\omega_{i_d}.\tag{16}$$
Here, ξ_{i_j} and ω_{i_j} with i_j = 1, …, K are the integration points and the corresponding integration weights in the jth dimension (j = 1, …, d). Obviously, the Smolyak sparse grid integration[19] can also be used to compute the expansion coefficients in place of the tensor product quadrature with much fewer integration points.
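As an illustration of the tensor product quadrature (16), the sketch below approximates a single coefficient c_α with Gauss-Legendre points from numpy, rescaling the weights for the uniform PDF on [-1, 1]; the toy model and the multi-index are assumptions for demonstration.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

def norm_legendre(n, x):
    """Legendre polynomial of degree n, orthonormal w.r.t. the uniform PDF on [-1, 1]."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return np.sqrt(2 * n + 1) * legval(x, c)

def pc_coefficient(f, alpha, K):
    """Approximate c_alpha in (16) with a full tensor grid of K points per dimension."""
    xi, w = leggauss(K)
    w = w / 2.0                                   # weights of the uniform density 1/2
    d = len(alpha)
    Xs = np.meshgrid(*([xi] * d), indexing="ij")  # tensorial integration points
    Ws = np.meshgrid(*([w] * d), indexing="ij")   # tensorial weights
    weight = np.ones_like(Xs[0])
    psi = np.ones_like(Xs[0])
    for i in range(d):
        weight *= Ws[i]
        psi *= norm_legendre(alpha[i], Xs[i])
    X = np.stack([g.ravel() for g in Xs], axis=1)
    return np.sum(f(X).reshape(Xs[0].shape) * psi * weight)

f = lambda X: X[:, 0] ** 2 + X[:, 0] * X[:, 1]    # toy model
print(pc_coefficient(f, alpha=(1, 1), K=5))       # expected 1/3 for this toy model
```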
(ⅱ) The regression approach (least squares minimization problem). If a set of N regression points {ξ1, …, ξN} is selected in the input space, the least squares minimization can be used to compute the expansion coefficients, namely,
$$\hat c = \arg\min_{c}\ \sum_{j=1}^{N}\Big(Y_j - \sum_{i} c_i\,\psi_i(\xi_j)\Big)^2 = \arg\min_{c}\ \|\,Y - \Psi^{\mathrm T}c\,\|_2^2,\tag{17}$$
where Y_j = f(ξ_j), and the matrix Ψ has the entries ψ_{ij} = ψ_i(ξ_j). The solution to the least squares minimization is easily obtained as
$$\hat c = (\Psi\Psi^{\mathrm T})^{-1}\Psi Y.\tag{18}$$
Here, ΨΨ^T is the information matrix, whose inverse should be computed by the singular value decomposition because it is usually ill-conditioned.
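A minimal Python sketch of the regression approach (17)-(18): the design (information) matrix of normalized Legendre polynomials is assembled at the regression points, and the least squares problem is solved by an SVD-based routine, which handles the usual ill-conditioning. The toy model, point set, and truncation are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial.legendre import legval

def norm_legendre(n, x):
    """Legendre polynomial of degree n, orthonormal for the uniform PDF on [-1, 1]."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return np.sqrt(2 * n + 1) * legval(x, c)

def design_matrix(alphas, points):
    """Entries (Psi^T)[j, k] = psi_{alpha_k}(xi_j) for multi-indices alphas (P, d)."""
    N, d = points.shape
    PsiT = np.ones((N, len(alphas)))
    for k, alpha in enumerate(alphas):
        for i in range(d):
            if alpha[i] > 0:
                PsiT[:, k] *= norm_legendre(alpha[i], points[:, i])
    return PsiT

rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(200, 2))                  # regression points
Y = pts[:, 0] ** 2 + pts[:, 0] * pts[:, 1]               # toy model responses
alphas = [(i, j) for i in range(4) for j in range(4) if i + j <= 3]
PsiT = design_matrix(np.array(alphas), pts)
c, *_ = np.linalg.lstsq(PsiT, Y, rcond=None)             # solves (18) via the SVD
print(dict(zip(alphas, c.round(3))))
```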
2.3 Sparse PC method
If the collection of PC basis functions is selected with a full degree order p, i.e., the multi-index set is chosen as
$$\Lambda_{p,d} = \Big\{\alpha\in\mathbb{N}^{d}:\ \|\alpha\|_1 = \sum_{i=1}^{d}\alpha_i \le p\Big\},\tag{19}$$
then the cardinality of Λ_{p,d}, i.e., the number of PC expansion coefficients, would be
$$P = \operatorname{card}(\Lambda_{p,d}) = \binom{p+d}{p} = \frac{(p+d)!}{p!\,d!}.\tag{20}$$
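The growth in (20) is easy to check numerically; the following snippet reproduces the basis sizes quoted in Section 3.

```python
from math import comb

# P = (p + d)! / (p! d!) = C(p + d, d), the cardinality in (20)
for d, p in [(2, 3), (8, 8), (20, 4)]:
    print(f"d = {d:2d}, p = {p}: P = {comb(p + d, d)}")
# d = 8, p = 8 gives P = 12870 and d = 20, p = 4 gives P = 10626,
# the basis sizes used for the Sobol and Morris functions in Section 3.
```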
When the dimension d or the order degree p increases, the number of expansion coefficients grows rapidly, as shown in Fig. 1. Thus, the traditional PC method is confronted with the "curse of dimensionality" coming from high-dimensional input spaces, which requires a great number of sample points for the computation of the Sobol indices with either the projection approach or the least squares regression method. In order to alleviate the "curse of dimensionality", the concept of sparse PC expansions was first proposed by Blatman and Sudret[12-14]. They initially adopted a stepwise regression procedure to select the significant coefficients of the PC expansion. They later used a more efficient variable selection technique from statistics, namely, the least angle regression, to adaptively build up a sparse PC approximation at a reduced computational cost. Their sparse PC methods were demonstrated to be an efficient technique for the variance-based GSA[14]. However, it is noted that the sparse representation of PC can in fact be realized through the sparse reconstruction techniques of compressive sensing, which are widely applied in the field of signal and image processing[20]. Compressive sensing based sparse surrogate methods[21-23] have been applied to the field of uncertainty quantification for high-dimensional problems.
Fig. 1 Number of PC expansion coefficients as a function of the dimension d for different values of the order degree p
In order to obtain the sparse dominant coefficients of PC expansions, the problem is to find a solution for expansion coefficients with the minimum number of non-zeros, i.e.,
$$\min_{c}\ \|c\|_0\qquad \text{s.t.}\qquad \Psi^{\mathrm T}c = Y,\tag{21}$$
where ||c||_0 := #{α : c_α ≠ 0}. By relaxing the equality constraint to allow some error tolerance ε, one obtains
$$\min_{c}\ \|c\|_0\qquad \text{s.t.}\qquad \|\Psi^{\mathrm T}c - Y\|_2 \le \varepsilon.\tag{22}$$
As mentioned by Tropp and Wright[24], there are at least five major classes of computational techniques for solving the above sparse approximation problem, three of which will be used in this paper and are listed as follows:
(ⅰ) Greedy pursuit: iteratively refine a sparse solution by successively identifying one or more components that yield the greatest improvement in quality[25]. Obviously, the adaptive stepwise regression procedure proposed by Blatman and Sudret[12] for the sparse PC belongs to this kind. There exist some effective greedy methods for the L0 minimization such as the OMP[26-27] and the compressive sampling matching pursuit (CoSaMP)[28].
(ⅱ) Convex relaxation: replace the combinatorial (NP-hard) problem with a convex optimization problem such as the L1 minimization problem[29-30]. There exist many L1 minimization algorithms such as the homotopy method[31], the SPGL1 method[32], and the iteratively reweighted least squares (IRLS)[33]. The least angle regression[34] algorithm is a simple modification of the homotopy method, which is also called the modified least angle regression method.
(ⅲ) Bayesian framework: assume a prior distribution that favors sparsity for the unknown coefficients, and develop a maximum a posteriori estimator that incorporates the observations[35-38].
These sparse approximation schemes can all be applied to the construction of sparse PC expansions. However, they have their own pros and cons. For example, the homotopy method is more suitable when the solution is very sparse. Bayesian methods are based on sound principles, but they do not offer strict theoretical guarantees for sparse recovery.
2.4 Sparse reconstruction algorithms
In this section, we briefly present three sparse reconstruction algorithms (the OMP, the SPGL1 method, and the BCS with Laplace priors), which are very efficient and suitable for high-dimensional sparse signals. They will be adopted for the construction of sparse PC expansions in this paper.
2.4.1 OMP
The OMP is an iterative greedy algorithm that, at each step, selects the column most correlated with the current residual. This column is then added into the set of selected columns. The algorithm updates the residual by projecting the observation onto the linear subspace spanned by the columns already selected, and then iterates. Compared with other alternative methods, a major advantage of the OMP is its simplicity and fast implementation. The OMP algorithm for the sparse PC expansion is formally described as follows:
(ⅰ) Input: the measurement matrix ψT and the measurement vector Y are introduced.
(ⅱ) Initialization: S^0 = ∅, and c^0 = 0.
(ⅲ) Iteration: repeat until a stopping criterion is met at n = n̄,
$$S^{n} = S^{n-1}\cup\Big\{\arg\max_{j}\big|\big(\Psi(Y-\Psi^{\mathrm T}c^{n-1})\big)_j\big|\Big\},\qquad c^{n} = \arg\min_{z}\big\{\|Y-\Psi^{\mathrm T}z\|_2:\ \operatorname{supp}(z)\subseteq S^{n}\big\}.$$
(ⅳ) Output: the n̄-sparse vector c♯ = c^{n̄}.
Obviously, a weakness of the OMP algorithm is that, once an incorrect index has been selected into a target support S^n, it remains in all subsequent target supports S^{n'} for n' > n. A sufficient condition for the OMP to recover a k-sparse signal exactly in the noiseless case is the mutual incoherence property, which requires the mutual coherence μ of the measurement matrix to be small, i.e.,
$$\mu := \max_{i\neq j}\ \frac{|\langle\psi_i,\psi_j\rangle|}{\|\psi_i\|_2\,\|\psi_j\|_2} < \frac{1}{2k-1},\tag{23}$$
where ψ_i denotes the ith column of the measurement matrix Ψ^T.
2.4.2 SPGL1 method
The L1 minimization problem admits a unique solution that coincides with the unique solution to the L0 minimization problem for sufficiently sparse c under some constraints on the measurement matrix Ψ^T, such as the restricted isometry property conditions[39-40]. Thus, we can instead solve the following L1 minimization problem to obtain the sparse approximation of PC expansions:
$$\min_{c}\ \|c\|_1\qquad \text{s.t.}\qquad \Psi^{\mathrm T}c = Y.\tag{24}$$
The L1 minimization problem (24) is also called the basis pursuit problem. Its constraint can likewise be relaxed to some error tolerance ε, which yields the basis pursuit denoising (BPDN) problem
$$\min_{c}\ \|c\|_1\qquad \text{s.t.}\qquad \|\Psi^{\mathrm T}c - Y\|_2 \le \varepsilon.\tag{25}$$
The SPGL1 method[32] treats the BPDN problem by solving a sequence of instances of the least absolute shrinkage and selection operator (LASSO) problem,
$$\min_{c}\ \|\Psi^{\mathrm T}c - Y\|_2\qquad \text{s.t.}\qquad \|c\|_1 \le \tau\tag{26}$$
for varying values of the parameter τ. By denoting the optimal solution of the LASSO problem by c_τ, the corresponding residual norm is defined as
$$\phi(\tau) = \|Y - \Psi^{\mathrm T}c_\tau\|_2,\tag{27}$$
which provides a parametrization of the Pareto curve. The BPDN problem then reduces to finding the root τ of the nonlinear equation
$$\phi(\tau) = \varepsilon,\tag{28}$$
which can be solved by Newton-type methods. The intermediate LASSO problem for τ = τk is solved via the spectral projected gradient method[41].
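The following simplified sketch illustrates the SPGL1 strategy: an inner projected-gradient solver for the LASSO subproblem (26) (with a fixed gradient step in place of the spectral step of Ref. [41]) and an outer Newton update on the Pareto curve (28) that uses the known slope φ'(τ) = -||Ψ r_τ||_∞ / ||r_τ||_2. The tolerances, iteration counts, and test problem are illustrative assumptions.

```python
import numpy as np

def project_l1(v, tau):
    """Euclidean projection of v onto the L1 ball of radius tau."""
    if tau <= 0:
        return np.zeros_like(v)
    if np.abs(v).sum() <= tau:
        return v
    u = np.sort(np.abs(v))[::-1]
    cum = np.cumsum(u)
    k = np.nonzero(u > (cum - tau) / np.arange(1, len(u) + 1))[0][-1]
    theta = (cum[k] - tau) / (k + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def lasso_pg(A, y, tau, iters=500):
    """Solve the LASSO subproblem (26) by projected gradient with a fixed step."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L for the smooth misfit term
    x = project_l1(A.T @ y, tau)
    for _ in range(iters):
        x = project_l1(x - step * A.T @ (A @ x - y), tau)
    return x

def spgl1_sketch(A, y, sigma, newton_iters=8):
    """Outer Newton iteration on the Pareto curve: find tau with phi(tau) = sigma."""
    tau, x = 0.0, np.zeros(A.shape[1])
    for _ in range(newton_iters):
        x = lasso_pg(A, y, tau)
        r = y - A @ x
        phi = np.linalg.norm(r)
        if phi <= sigma:
            break
        tau += (phi - sigma) * phi / np.abs(A.T @ r).max()   # Newton step on (28)
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 120))
x_true = np.zeros(120)
x_true[[3, 40, 77]] = [2.0, -1.5, 1.0]
y = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = spgl1_sketch(A, y, sigma=0.1)
print(np.flatnonzero(np.abs(x_hat) > 0.1))        # expected support: [ 3 40 77]
```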
2.4.3 BCS with Laplace priors
In Bayesian modeling, all unknowns are treated as stochastic quantities with assigned probability distributions. With the measurement matrix Ψ^T, the following linear system is considered:
$$Y = \Psi^{\mathrm T}c + \epsilon,\tag{29}$$
where the unknown coefficient vector c is assigned a prior distribution p(c | λ), and the unknown truncation error ε is treated as zero-mean independent Gaussian observation noise. Thus, the likelihood function (or observation model) is formulated as
$$p(Y\,|\,c,\beta) = \big(2\pi\beta^{-1}\big)^{-N/2}\exp\Big(-\frac{\beta}{2}\,\|Y-\Psi^{\mathrm T}c\|_2^2\Big),\tag{30}$$
where β is the inverse noise variance. As mentioned by Wipf and Rao[36], the L1 minimization is equivalent to using a Laplace prior,
$$p(c\,|\,\lambda) = \Big(\frac{\lambda}{2}\Big)^{P}\exp\Big(-\lambda\sum_{i=1}^{P}|c_i|\Big),\tag{31}$$
and using a maximum a posteriori formulation for the unknown coefficients. However, this Laplace prior is not conjugate to the likelihood function and does not allow a tractable Bayesian analysis. An alternative approach is to adopt separate Gaussian priors, and the resulting Bayesian formulation is the well-known relevance vector machine[35-36] (or sparse Bayesian learning[36]). Because Laplace priors promote sparsity to a much larger extent than Gaussian priors, a hierarchical Laplace prior model is proposed by Babacan et al.[38] as follows:
$$p(c\,|\,\gamma) = \prod_{i=1}^{P}\mathcal{N}(c_i\,|\,0,\gamma_i),\tag{32}$$
$$p(\gamma\,|\,\lambda) = \prod_{i=1}^{P}\frac{\lambda}{2}\exp\Big(-\frac{\lambda\gamma_i}{2}\Big).\tag{33}$$
By combining the likelihood function and the hierarchical prior model, the joint distribution can be given as
$$p(Y, c, \gamma, \lambda, \beta) = p(Y\,|\,c,\beta)\,p(c\,|\,\gamma)\,p(\gamma\,|\,\lambda)\,p(\lambda)\,p(\beta),\tag{34}$$
where the Gamma distribution is used for the priors of p(λ) and p(β). Then, the Bayesian inference based posterior distribution is given by
$$p(c, \gamma, \lambda, \beta\,|\,Y) = \frac{p(Y, c, \gamma, \lambda, \beta)}{p(Y)}.\tag{35}$$
However, this posterior is intractable, since the marginal likelihood p(Y) cannot be calculated analytically. Thus, the evidence procedure (the type-Ⅱ maximum likelihood approach) can be adopted to perform the Bayesian inference. Furthermore, the fast marginal likelihood maximization[42] can be used for the fast estimation of γ, λ, and β. Finally, the posterior distribution of c is given by a multivariate Gaussian distribution,
$$p(c\,|\,Y,\gamma,\lambda,\beta) = \mathcal{N}(c\,|\,\mu,\Sigma)\tag{36}$$
with parameters
$$\mu = \beta\,\Sigma\,\Psi Y,\qquad \Sigma = \big(\beta\,\Psi\Psi^{\mathrm T} + \operatorname{diag}(\gamma_1^{-1},\ldots,\gamma_P^{-1})\big)^{-1}.\tag{37}$$
3 Numerical examples
In this section, three standard benchmark functions (or problems) are used to validate the effectiveness and accuracy of the sparse PC method for the variance-based GSA through the three sparse reconstruction algorithms introduced in the last section. The variance-based GSA proceeds through three basic steps:
(ⅰ) Sample the inputs within their variability space. The QMC sampling[43] is used in this paper; different sampling strategies for sparse surrogates with compressive sensing are surveyed in a recent review by Narayan and Zhou[44]. A small sampling sketch follows this list.
(ⅱ) Construct the sparse surrogate model, i.e., the sparse PC expansion, from the input sample points.
(ⅲ) Post-process the sparse surrogate model, i.e., the input/output relationship, to compute the Sobol sensitivity indices.
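A small sketch of step (ⅰ), using the Sobol' low-discrepancy generator from scipy.stats.qmc scaled to the hypercube [-1, 1]^d; scipy's QMC module is a convenient implementation choice here, not the generator of Ref. [43].

```python
import numpy as np
from scipy.stats import qmc

d, N = 8, 450
sampler = qmc.Sobol(d=d, scramble=True, seed=0)
points01 = sampler.random(N)        # N points in [0, 1)^d (scipy warns when N is
                                    # not a power of two, which is harmless here)
points = qmc.scale(points01, -np.ones(d), np.ones(d))   # map to [-1, 1]^d
print(points.shape, points.min().round(3), points.max().round(3))
```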
3.1 Sobol function
The analytical g-function or Sobol function attributed to Sobol (1990)[3, 45] is used as our first example to perform the variance-based SA,
$$f(x) = \prod_{i=1}^{d}\frac{|4x_i - 2| + a_i}{1 + a_i},\tag{38}$$
where the input variables xi (i = 1, 2, …, d) are uniformly distributed over [0, 1], and ai are nonnegative parameters. This function is widely used as a test function in SA because it is strongly nonlinear, non-monotonic, and non-smooth, and all its interaction terms are nonzero by definition. Moreover, the output variance decomposition and the variance-based sensitivity indices Si and STi can be computed analytically. The analytical representations of the partial variances Vi, the variance D of Y, and the Sobol indices are given by
$$V_i = \frac{1}{3(1+a_i)^2},\qquad D = \prod_{i=1}^{d}(1+V_i) - 1,\qquad S_i = \frac{V_i}{D},\qquad S_{Ti} = \frac{V_i\prod_{j\neq i}(1+V_j)}{D}.\tag{39}$$
For our example case, the dimension d=8 and the degree order p=8 are selected together with a = [1, 2, 5, 10, 20, 50, 100, 500]. The number of full PC basis functions is P = 12 870, which is too large for the traditional regression method or the projection approach. Then, the sparse PC method is performed on a sequence of QMC sampling points with the sample number N=450. We first use the OMP method to construct the sparse PC expansion and then compute the Sobol sensitivity indices directly from the expansion coefficients. The computed main effect and total effect indices are compared with the analytical solutions (39), as shown in Fig. 2. It is easily seen that good agreement between the computed and analytical Sobol indices is obtained except for a slight difference in S1 and ST1.
Fig. 2 Computations of variance-based sensitivity indices of the Sobol function by the sparse PC method with the OMP
Similar computations of the Sobol sensitivity indices are performed with the SPGL1 method and the BCS with Laplace priors, as shown in Figs. 3 and 4, respectively.
Fig. 3 Computations of variance-based sensitivity indices of the Sobol function by the sparse PC method with the SPGL1 method
Fig. 4 Computations of variance-based sensitivity indices of the Sobol function by the sparse PC method with the BCS using Laplace priors
We further examine the computational efficiency by comparing the run-times for the computations of the Sobol indices with the different sparse reconstruction algorithms, as shown in Table 1. It is clearly seen that the BCS algorithm used in this paper is the most efficient, while the SPGL1 method needs the most run-time to obtain the Sobol indices.
Table 1 Comparison of run-times for the computations of Sobol indices with different sparse reconstruction algorithms
The BCS treats the PC expansion coefficients as random variables given by the multivariate Gaussian distribution (36). The error bars (i.e., the confidence intervals) of the Sobol indices can be quickly computed as two standard deviations above and below the mean value, as shown in Fig. 5. It is easily seen that larger indices have wider confidence intervals, which represent the uncertainty ranges of the Sobol indices.
Fig. 5 Error bars of variance-based sensitivity indices of the Sobol function by the sparse PC method with the BCS using Laplace priors
3.2 Morris function
In order to verify the proposed sparse PC methods for high-dimensional problems, we now consider the Morris function[1] as follows:
$$f(x) = \beta_0 + \sum_{i=1}^{20}\beta_i w_i + \sum_{i<j}\beta_{ij}w_i w_j + \sum_{i<j<k}\beta_{ijk}w_i w_j w_k + \sum_{i<j<k<l}\beta_{ijkl}w_i w_j w_k w_l,\tag{40}$$
where
$$w_i = \begin{cases}2\Big(\dfrac{1.1x_i}{x_i+0.1}-\dfrac{1}{2}\Big), & i = 3, 5, 7,\\[3mm] 2\Big(x_i-\dfrac{1}{2}\Big), & \text{otherwise},\end{cases}\tag{41}$$
and the input variables xi (i=1, 2, …, 20) are uniformly distributed over [0, 1]. The coefficients βi are assigned as follows:
$$\beta_i = 20,\ \ i = 1,\ldots,10;\qquad \beta_{ij} = -15,\ \ i, j = 1,\ldots,6;\qquad \beta_{ijk} = -10,\ \ i, j, k = 1,\ldots,5;\qquad \beta_{ijkl} = 5,\ \ i, j, k, l = 1,\ldots,4.\tag{42}$$
The remaining first- and second-order coefficients are defined by β0 = 0, βi = (−1)^i, and βij = (−1)^{i+j}, while the remaining third- and fourth-order coefficients are zero. The degree order p=4 is selected, and the number of candidate PC basis functions is P = 10 626. The sample points are also selected by the QMC sampling with N=880. We first use the sparse PC method with the OMP scheme to compute the Sobol sensitivity indices. Since analytical solutions for the Sobol indices do not exist, the crude QMC sampling method with 20 000 sample points is used to compute the Sobol indices for comparison with the computational results. It is clearly seen from Fig. 6 that good agreement is obtained except that the computed total effect indices from ST11 to ST20 are much higher than zero, while the results from the crude QMC are nearly zero. We also use the SPGL1 method to compute the Sobol indices, as shown in Fig. 7. The agreement is not so good; in particular, there exist large differences for the computed total effect indices from ST5 to ST10 and from ST11 to ST20, which changes the relative sensitivity ranking of some input parameters. Finally, the BCS with Laplace priors is used to construct the sparse PC expansion for the Morris function, and the computed Sobol indices are shown in Fig. 8. Most Sobol indices show smaller relative changes between the computational and crude QMC solutions, which shows that the BCS is more robust than the other two approaches.
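For reference, a Python sketch of the Morris function (40)-(42) as specified above is given below; taking the third- and fourth-order coefficients outside the listed index ranges as zero follows the standard definition in Ref. [1].

```python
import numpy as np
from itertools import combinations

def morris(x):
    """Morris function (40)-(42); x: array of 20 inputs, each in [0, 1]."""
    w = 2.0 * (x - 0.5)
    for i in (2, 4, 6):                           # 1-based inputs 3, 5, 7 in (41)
        w[i] = 2.0 * (1.1 * x[i] / (x[i] + 0.1) - 0.5)
    y = 0.0
    for i in range(20):                           # beta_i = 20 (i <= 10), else (-1)^i
        y += (20.0 if i < 10 else (-1.0) ** (i + 1)) * w[i]
    for i, j in combinations(range(20), 2):       # beta_ij = -15 (i, j <= 6),
        bij = -15.0 if j < 6 else (-1.0) ** (i + j)   # else (-1)^(i+j), same parity
        y += bij * w[i] * w[j]
    for i, j, k in combinations(range(5), 3):     # beta_ijk = -10 for i, j, k <= 5
        y += -10.0 * w[i] * w[j] * w[k]
    y += 5.0 * w[0] * w[1] * w[2] * w[3]          # the single beta_ijkl = 5 term
    return y

print(morris(np.full(20, 0.5)))
```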
Fig. 6 Computations of variance-based sensitivity indices of the Morris function by the sparse PC method with the OMP
Fig. 7 Computations of variance-based sensitivity indices of the Morris function by the sparse PC method with the SPGL1 method
Fig. 8 Computations of variance-based sensitivity indices of the Morris function by the sparse PC method with the BCS using Laplace priors
3.3 Sod shock tube problem
The Sod shock tube problem[46] is a classical Riemann problem of hyperbolic conservation laws for ideal gases in shock physics. The governing equations are the one-dimensional inviscid Euler equations as follows:
$$\frac{\partial\rho}{\partial t} + \frac{\partial(\rho u)}{\partial x} = 0,\tag{43}$$
$$\frac{\partial(\rho u)}{\partial t} + \frac{\partial(\rho u^2 + p)}{\partial x} = 0,\tag{44}$$
$$\frac{\partial(\rho E)}{\partial t} + \frac{\partial\big(u(\rho E + p)\big)}{\partial x} = 0,\tag{45}$$
where ρ is the density, u is the velocity along the x-direction, E = e + u2/2 is the total energy per unit mass, p is the pressure, and e is the specific internal energy. The equation of state for ideal gases is represented by
$$p = (\gamma - 1)\rho e,\tag{46}$$
where γ is the ratio of specific heats. The Riemann problem is defined by two gases at initially uniform states on both sides of an interface. This describes an experimental shock tube, and the Riemann problem is also called the shock tube problem. Moreover, the Sod shock tube problem is defined with initial conditions as follows:
$$(\rho_{\mathrm L}, u_{\mathrm L}, p_{\mathrm L}) = (1, 0, 1)\ \ \text{for}\ x < 0.5,\qquad (\rho_{\mathrm R}, u_{\mathrm R}, p_{\mathrm R}) = (0.125, 0, 0.1)\ \ \text{for}\ x > 0.5\tag{47}$$
with γs = 1.4. When the interface is removed at the time t=0, the gases interact and generate a left-moving rarefaction wave, a middle contact surface, and a right-moving shock wave. All state variables change smoothly through the rarefaction wave, while shock waves and contact surfaces involve discontinuous jumps across the wave. The analytical solution of the Sod shock tube problem can be obtained through an iterative procedure for computing the pressure in the middle region between the rarefaction wave and the shock wave. Then, we can use this discontinuous solution to study the GSA of the initial conditions pL, pR, ρL, ρR, and γ on quantities of interest such as the density field at the time t=0.15. The five uncertain inputs for the Sod problem are assumed to have uniform distributions around the nominal values with at most 10% relative departure, i.e.,
$$p_{\mathrm L} = 1 + 0.1x_1,\qquad p_{\mathrm R} = 0.1(1 + 0.1x_2),\qquad \rho_{\mathrm L} = 1 + 0.1x_3,\tag{48}$$
$$\rho_{\mathrm R} = 0.125(1 + 0.1x_4),\qquad \gamma = 1.4(1 + 0.1x_5).\tag{49}$$
Here, the random variables xi (i=1, 2, …, 5) are uniformly distributed over [-1, 1]. The BCS using Laplace priors is used to construct the sparse PC expansions of output quantities of interest and to compute the output statistical information for forward uncertainty quantification and the Sobol indices for GSA.
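A small sketch of the parametrization (48)-(49): the five canonical variables x_i ∈ [-1, 1] are mapped to the physical initial conditions with at most 10% relative departure from the nominal Sod values (the helper name and dictionary layout are illustrative).

```python
import numpy as np

NOMINAL = dict(pL=1.0, pR=0.1, rhoL=1.0, rhoR=0.125, gamma=1.4)

def sod_inputs(x):
    """x: array of 5 values in [-1, 1] -> physical inputs for the Riemann solver."""
    names = ["pL", "pR", "rhoL", "rhoR", "gamma"]
    return {n: NOMINAL[n] * (1.0 + 0.1 * xi) for n, xi in zip(names, x)}

print(sod_inputs(np.zeros(5)))                    # the nominal state (47)
print(sod_inputs(np.array([1.0, -1.0, 0.5, 0.0, -0.2])))
```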
In Fig. 9, the statistical mean and standard deviation of the density field at the time t=0.15 are computed and compared between the sparse PC method with the BCS and the QMC method. The sparse PC method adopts the degree order p=8 (P = 1 287) and only 100 sample points, while the QMC method uses 50 000 sample points for full convergence. Good agreement for the statistical mean is easily found between these two approaches. However, for the standard deviations, there are large differences near both the contact discontinuity and the shock wave discontinuity. This phenomenon shows that the accuracy of sparse PC expansions degrades for discontinuous problems due to the Gibbs oscillation. The comparison for the main effect and total effect indices is also performed, as shown in Figs. 10 and 11. There also exist large differences near the positions of discontinuity, especially for the shock wave discontinuity. Moreover, for the QMC method, the main index S2 takes negative values near the shock wave, which violates the positivity of Sobol indices. This phenomenon shows the weakness of the QMC method in computing the Sobol indices. The Sod shock tube problem thus exhibits once again the effectiveness and accuracy of the sparse PC method constructed by the BCS with Laplace priors for the variance-based GSA.
Fig. 9 Statistical mean and standard deviation of the density field at the time t=0.15, compared between the sparse PC method with the BCS and the QMC method
Fig. 10 Main effect indices of the density field at the time t=0.15, compared between the sparse PC method with the BCS and the QMC method
Fig. 11 Total effect indices of the density field at the time t=0.15, compared between the sparse PC method with the BCS and the QMC method
4 Conclusions
In this paper, sparse PC methods based on the sparse reconstruction algorithms of the OMP, the SPGL1 method, and the BCS with Laplace priors are demonstrated to be powerful techniques for the variance-based GSA of high-dimensional inputs. The sparse PC surrogate models can be constructed from far fewer sample points than the traditional approaches require, which greatly alleviates the "curse of dimensionality" caused by high-dimensional inputs and high expansion orders. The three sparse reconstruction algorithms are compared in the computations of the variance-based sensitivity indices (i.e., Sobol indices) of the Sobol function and the Morris function, which have eight and twenty input dimensions, respectively. It is found that the sparse surrogate model built by the BCS with Laplace priors is the most efficient and robust for the variance-based GSA, while the OMP also has relatively good accuracy and fast implementation. Furthermore, because the BCS treats the PC expansion coefficients as random variables, the uncertainty range (i.e., error bar) of each Sobol index can be easily obtained at a relatively small computational cost. Finally, the sparse PC method with the BCS is adopted for the GSA of the Sod shock tube problem. It is revealed that the sparse PC method cannot avoid the accuracy degradation near both the contact discontinuity and the shock wave discontinuity due to the well-known Gibbs oscillation of spectral methods for discontinuous solutions.
[1] Saltelli, A., Ratto, M., Andres, T., Campolongo, F., Cariboni, J., Gatelli, D., Saisana, M., and Tarantola, S. Global Sensitivity Analysis: The Primer, Wiley, England (2008)
[2] Saltelli, A. Sensitivity analysis for importance assessment. Risk Analysis, 22, 579-590 (2002)
[3] Sobol, I. M. Sensitivity estimates for nonlinear mathematical models. Mathematical Modeling and Computational Experiment, 1, 407-414 (1993)
[4] Homma, T. and Saltelli, A. Importance measures in global sensitivity analysis of model output. Reliability Engineering and System Safety, 52, 1-17 (1996)
[5] Borgonovo, E. A new uncertainty importance measure. Reliability Engineering and System Safety, 92, 771-784 (2007)
[6] Saltelli, A. Making best use of model evaluations to compute sensitivity indices. Computer Physics Communications, 145, 280-297 (2002)
[7] Wei, P., Lu, Z., and Yuan, X. Monte Carlo simulation for moment-independent sensitivity analysis. Reliability Engineering and System Safety, 110, 60-67 (2013)
[8] Oakley, J. E. and O'Hagan, A. Probabilistic sensitivity analysis of complex models: a Bayesian approach. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 66, 751-769 (2004)
[9] Sudret, B. Global sensitivity analysis using polynomial chaos expansions. Reliability Engineering and System Safety, 93, 964-979 (2008)
[10] Xiu, D. and Karniadakis, G. E. The Wiener-Askey polynomial chaos for stochastic differential equations. SIAM Journal on Scientific Computing, 24, 619-644 (2002)
[11] Le Maitre, O. P. and Knio, O. M. Spectral Methods for Uncertainty Quantification: with Applications to Computational Fluid Dynamics, Springer, Netherlands (2010)
[12] Blatman, G. and Sudret, B. An adaptive algorithm to build up sparse polynomial chaos expansions for stochastic finite element analysis. Probabilistic Engineering Mechanics, 25, 183-197 (2010)
[13] Blatman, G. and Sudret, B. Efficient computation of global sensitivity indices using sparse polynomial chaos expansions. Reliability Engineering and System Safety, 95, 1216-1229 (2010)
[14] Blatman, G. and Sudret, B. Adaptive sparse polynomial chaos expansion based on least angle regression. Journal of Computational Physics, 230, 2345-2367 (2011)
[15] Cameron, R. and Martin, W. The orthogonal development of nonlinear functionals in series of Fourier-Hermite functionals. Annals of Mathematics, 48, 385-392 (1947)
[16] Field, R. V. Numerical methods to estimate the coefficients of the polynomial chaos expansion. Proceedings of the 15th ASCE Engineering Mechanics Conference, ASCE, New York (2002)
[17] Choi, S. K., Grandhi, R. V., Canfield, R. A., and Pettit, C. Polynomial chaos expansion with Latin hypercube sampling for estimating response variability. AIAA Journal, 42, 1191-1198 (2004)
[18] Radović, I., Sobol, I. M., and Tichy, R. F. Quasi-Monte Carlo methods for numerical integration: comparison of different low discrepancy sequences. Monte Carlo Methods and Applications, 2, 1-14 (1996)
[19] Smolyak, S. A. Quadrature and interpolation formulas for tensor products of certain classes of functions. Soviet Mathematics Doklady, 4, 240-243 (1963)
[20] Foucart, S. and Rauhut, H. A Mathematical Introduction to Compressive Sensing, Springer, New York (2013)
[21] Doostan, A. and Owhadi, H. A non-adapted sparse approximation of PDEs with stochastic inputs. Journal of Computational Physics, 230, 3015-3034 (2011)
[22] Mathelin, L. and Gallivan, K. A. A compressed sensing approach for partial differential equations with random input data. Communications in Computational Physics, 12, 919-954 (2012)
[23] Peng, J., Hampton, J., and Doostan, A. A weighted ℓ1-minimization approach for sparse polynomial chaos expansions. Journal of Computational Physics, 267, 92-111 (2014)
[24] Tropp, J. A. and Wright, S. J. Computational methods for sparse solution of linear inverse problems. Proceedings of the IEEE, 98, 948-958 (2010)
[25] Mallat, S. and Zhang, Z. Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing, 41, 3397-3415 (1993)
[26] Pati, Y. C., Rezaiifar, R., and Krishnaprasad, P. S. Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition. The 27th Annual Asilomar Conference on Signals, Systems and Computers, 1, 40-44 (1993)
[27] Davis, G., Mallat, S., and Avellaneda, M. Adaptive greedy approximations. Constructive Approximation, 13, 57-98 (1997)
[28] Needell, D. and Tropp, J. A. CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Applied and Computational Harmonic Analysis, 26, 301-321 (2009)
[29] Chen, S., Donoho, D., and Saunders, M. Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing, 20, 33-61 (1998)
[30] Chen, S., Donoho, D., and Saunders, M. Atomic decomposition by basis pursuit. SIAM Review, 43, 129-159 (2001)
[31] Donoho, D. L. and Tsaig, Y. Fast solution of l1-norm minimization problems when the solution may be sparse. IEEE Transactions on Information Theory, 54, 4789-4812 (2008)
[32] Van den Berg, E. and Friedlander, M. Probing the Pareto frontier for basis pursuit solutions. SIAM Journal on Scientific Computing, 31, 890-912 (2008)
[33] Daubechies, I., DeVore, R., Fornasier, M., and Güntürk, C. S. Iteratively reweighted least squares minimization for sparse recovery. Communications on Pure and Applied Mathematics, 63, 1-38 (2010)
[34] Efron, B., Hastie, T., Johnstone, I., and Tibshirani, R. Least angle regression. Annals of Statistics, 32, 407-499 (2004)
[35] Tipping, M. E. Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research, 1, 211-244 (2001)
[36] Wipf, D. and Rao, B. Sparse Bayesian learning for basis selection. IEEE Transactions on Signal Processing, 52, 2153-2164 (2004)
[37] Ji, S., Xue, Y., and Carin, L. Bayesian compressive sensing. IEEE Transactions on Signal Processing, 56, 2346-2356 (2008)
[38] Babacan, S. D., Molina, R., and Katsaggelos, A. K. Bayesian compressive sensing using Laplace priors. IEEE Transactions on Image Processing, 19, 53-63 (2010)
[39] Candès, E., Romberg, J., and Tao, T. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52, 489-509 (2006)
[40] Donoho, D. L. Compressed sensing. IEEE Transactions on Information Theory, 52, 1289-1306 (2006)
[41] Birgin, E. G., Martínez, J. M., and Raydan, M. Inexact spectral projected gradient methods on convex sets. IMA Journal of Numerical Analysis, 23, 539-559 (2003)
[42] Tipping, M. and Faul, A. Fast marginal likelihood maximisation for sparse Bayesian models. Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics, Morgan Kaufmann Publishers, Florida (2003)
[43] Sobol, I. M., Turchaninov, V. I., Levitan, Y. L., and Shukhman, B. V. Quasi-Random Sequence Generators, Russian Academy of Sciences, Moscow (1992)
[44] Narayan, A. and Zhou, T. Stochastic collocation on unstructured multivariate meshes. Communications in Computational Physics, 18, 1-36 (2015)
[45] Sobol, I. M. Theorems and examples on high dimensional model representation. Reliability Engineering and System Safety, 79, 187-193 (2003)
[46] Sod, G. A. A survey of several finite difference methods for systems of nonlinear hyperbolic conservation laws. Journal of Computational Physics, 27, 1-31 (1978)