Ghahari, Farid G.; Sargsyan, Khachik S.; Celebi, Mehmet C.; Taciroglu, Ertugrul T.
The use of simple models for response prediction of building structures is preferred in earthquake engineering for risk evaluations at regional scales, as they make computational studies more feasible. The primary impediment to their gainful use at present is the lack of viable methods for quantifying (and reducing) the modeling errors/uncertainties they bear. This study presents a Bayesian calibration method wherein the modeling error is embedded into the parameters of the model. The method is described here specifically for coupled shear-flexural beam models, but it can be applied to any parametric surrogate model. The major benefit the method offers is the ability to consider modeling uncertainty in the forward prediction of any degree-of-freedom or composite response, regardless of the data used in calibration. The method is extensively verified using two synthetic examples. In the first example, the beam model is calibrated to represent a similar beam model but with enforced modeling errors. In the second example, the beam model is used to represent a detailed finite element model of a 52-story building. Both examples show the capability of the proposed solution to provide realistic uncertainty estimates around the mean prediction.
A Bayesian inference strategy has been used to estimate uncertain inputs to global impurity transport code (GITR) modeling predictions of tungsten erosion and migration in the linear plasma device, PISCES-A. This allows quantification of GITR output uncertainty based on the uncertainties in measured PISCES-A plasma electron density and temperature profiles (ne, Te) used as inputs to GITR. The technique has been applied for comparison to dedicated experiments performed for high (4 × 10²² m⁻² s⁻¹) and low (5 × 10²¹ m⁻² s⁻¹) flux 250 eV He-plasma exposed tungsten (W) targets designed to assess the net and gross erosion of tungsten, and corresponding W impurity transport. The W target design and orientation, impurity collector, and diagnostics have been designed to eliminate complexities associated with tokamak divertor plasma exposures (inclined target, mixed plasma species, re-erosion, etc.) to benchmark results against the trace impurity transport model simulated by GITR. The simulated results of the erosion, migration, and re-deposition of W during the experiment from the GITR code coupled to materials response models are presented. Specifically, the modeled and experimental W I emission spectroscopy data for a 429.4 nm line and net erosion through the target and collector mass difference measurements are compared. Furthermore, the methodology provides predictions of observable quantities of interest with quantified uncertainty, allowing estimation of moments, together with the sensitivities to plasma temperature and density.
The UQ Toolkit (UQTk) is a collection of libraries and tools for the quantification of uncertainty in numerical model predictions. Version 3.1.2 offers intrusive and non-intrusive methods for propagating input uncertainties through computational models, tools for sensitivity analysis, methods for sparse surrogate construction, and Bayesian inference tools for inferring parameters from experimental data. This manual discusses the download and installation process for UQTk, provides pointers to the UQ methods used in the toolkit, and describes some of the examples provided with the toolkit.
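As a minimal illustration of the non-intrusive propagation idea (plain numpy, not the UQTk API): a model with a standard normal input is projected onto probabilists' Hermite polynomials via Gauss-Hermite quadrature, yielding polynomial chaos coefficients from model evaluations alone.

```python
import math
import numpy as np

def pc_coeffs_nisp(model, order, nquad=20):
    """Non-intrusive spectral projection onto probabilists' Hermite
    polynomials He_k for a standard normal input xi."""
    x, w = np.polynomial.hermite.hermgauss(nquad)  # weight exp(-x^2)
    xi = np.sqrt(2.0) * x                          # rescale nodes to N(0,1)
    wn = w / np.sqrt(np.pi)                        # rescaled weights sum to 1
    fvals = model(xi)
    coeffs = []
    for k in range(order + 1):
        Hk = np.polynomial.hermite_e.hermeval(xi, [0.0] * k + [1.0])
        coeffs.append(np.sum(wn * fvals * Hk) / math.factorial(k))  # E[He_k^2] = k!
    return np.array(coeffs)

# For model(xi) = exp(xi), the exact PC coefficients are exp(1/2)/k!
c = pc_coeffs_nisp(np.exp, 4)
```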
Flooding impacts are on the rise globally and are concentrated in urban areas. Currently, there are no operational systems to forecast flooding at spatial resolutions that can facilitate emergency preparedness and response actions mitigating flood impacts. We present a framework for real-time flood modeling and uncertainty quantification that combines the physics of fluid motion with advances in probabilistic methods. The framework overcomes the prohibitive computational demands of high-fidelity modeling in real-time by using a probabilistic learning method relying on surrogate models that are trained prior to a flood event. This shifts the overwhelming burden of computation to the trivial problem of data storage, and enables forecasting of both flood hazard and its uncertainty at scales that are vital for time-critical decision-making before and during extreme events. The framework has the potential to improve flood prediction and analysis and can be extended to other hazard assessments requiring intense high-fidelity computations in real-time.
A new method for computing anharmonic thermophysical properties for adsorbates on metal surfaces is presented. Classical Monte Carlo phase space integration is performed to calculate the partition function for the motion of a hydrogen atom on Cu(111). A minima-preserving neural network potential energy surface is used within the integration routine. Two different sampling schemes for generating the training data are presented, and two different density functionals are used. The results are benchmarked against direct state counting results obtained using a discrete variable representation. The phase space integration results are in excellent quantitative agreement with the benchmark results. Additionally, both the discrete variable representation and the phase space integration results confirm that the motion of H on Cu(111) is highly anharmonic. The results were applied to calculate the free energy of dissociative adsorption of H2 and the resulting Langmuir isotherms at 400, 800, and 1200 K in a partial pressure range of 0-1 bar. These results show that the anharmonic effects lead to significantly higher predicted surface site fractions of hydrogen.
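The core step above can be sketched in a few lines (illustrative only, with a 1-D harmonic potential in reduced units rather than the neural-network PES): the configurational partition function Z = ∫ exp(-V(x)/kBT) dx is estimated by Monte Carlo sampling over a box, and compared with the closed-form Gaussian integral.

```python
import numpy as np

rng = np.random.default_rng(0)
kB_T = 1.0       # reduced units, kB*T = 1 (assumed for illustration)
k_spring = 4.0   # harmonic force constant

def V(x):
    return 0.5 * k_spring * x**2

L = 10.0         # box half-width; exp(-V/kBT) is negligible outside
n = 200_000
x = rng.uniform(-L, L, n)
# MC estimate of Z = (volume) * mean of the Boltzmann factor
Z_mc = 2 * L * np.mean(np.exp(-V(x) / kB_T))
Z_exact = np.sqrt(2 * np.pi * kB_T / k_spring)  # Gaussian integral
```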
We present a new geodesic-based method for geometry optimization in a basis set of redundant internal coordinates. Our method updates the molecular geometry by following the geodesic generated by a displacement vector on the internal coordinate manifold, which dramatically reduces the number of steps required to converge to a minimum. Our method can be implemented in any existing optimization code, requiring only implementation of derivatives of the Wilson B-matrix and the ability to numerically solve an ordinary differential equation.
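The central operation, stepping along the geodesic generated by a displacement vector, can be illustrated on the unit sphere, where the exponential map has a closed form (on the redundant internal-coordinate manifold of the paper, an ODE solve replaces this formula). This toy example is not the paper's method.

```python
import numpy as np

p = np.array([1.0, 0.0, 0.0])   # current point on the unit sphere
v = np.array([0.0, 0.4, 0.3])   # tangent displacement vector (v is orthogonal to p)
nv = np.linalg.norm(v)
# Exponential map on the sphere: follow the geodesic from p along v
q = np.cos(nv) * p + np.sin(nv) * v / nv
```

Unlike a naive straight-line step followed by reprojection, the geodesic endpoint q stays exactly on the manifold and lies at geodesic distance |v| from p.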
Kreitz, Bjarne K.; Sargsyan, Khachik S.; Mazeau, Emily J.; Blondal, Katrin B.; West, Richard H.; Wehinger, Gregor W.; Turek, Thomas T.; Goldsmith, Franklin G.
Automatic mechanism generation is used to determine mechanisms for CO2 hydrogenation on Ni(111) in a two-stage process while systematically considering the correlated uncertainty in DFT-based energetic parameters. In a coarse stage, all the possible chemistry is explored with gas-phase products down to the ppb level, while a refined stage discovers the core methanation submechanism. Five thousand unique mechanisms were generated, each containing minor perturbations in all parameters. Global uncertainty assessment, global sensitivity analysis, and degree of rate control analysis are performed to study the effect of this parametric uncertainty on the microkinetic model predictions. Comparison of the model predictions with experimental data on a Ni/SiO2 catalyst finds a feasible set of microkinetic mechanisms within the correlated uncertainty space that are in quantitative agreement with the measured data, without relying on explicit parameter optimization. Global uncertainty and sensitivity analyses provide tools to determine the pathways and key factors that control the methanation activity within the parameter space. Together, these methods reveal that the degree of rate control approach can be misleading if parametric uncertainty is not considered. The procedure of considering uncertainties in the automated mechanism generation is not unique to CO2 methanation and can be easily extended to other challenging heterogeneously catalyzed reactions.
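The key distinction from independent perturbation is that correlated samples respect the joint error structure of the DFT parameters. A generic sketch (covariance values hypothetical, not from the paper): correlated energy perturbations drawn via a Cholesky factor of an assumed error covariance.

```python
import numpy as np

rng = np.random.default_rng(1)
mean = np.zeros(3)                        # nominal energy corrections (eV)
cov = np.array([[0.04, 0.03, 0.01],       # assumed correlated DFT errors;
                [0.03, 0.04, 0.01],       # strong correlation between the
                [0.01, 0.01, 0.04]])      # first two parameters
Lc = np.linalg.cholesky(cov)
# Each row is one correlated draw of the three energetic parameters
samples = mean + rng.standard_normal((5000, 3)) @ Lc.T
emp_cov = np.cov(samples, rowvar=False)   # should reproduce cov
```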
We demonstrate a Bayesian method for the “real-time” characterization and forecasting of a partially observed COVID-19 epidemic. Characterization is the estimation of infection spread parameters using daily counts of symptomatic patients. The method is designed to help guide medical resource allocation in the early epoch of the outbreak. The estimation problem is posed as one of Bayesian inference and solved using a Markov chain Monte Carlo technique. The data used in this study were sourced before the arrival of the second wave of infection in July 2020. The proposed modeling approach, when applied at the country level, generally provides accurate forecasts at the regional, state, and country levels. The epidemiological model detected the flattening of the curve in California after public health measures were instituted. The method also detected different disease dynamics when applied to specific regions of New Mexico.
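A minimal random-walk Metropolis sketch in the same spirit (toy exponential-growth model with Poisson counts, not the paper's epidemiological model): infer a growth rate r from daily case counts.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(25)
r_true, y0 = 0.12, 10.0                        # illustrative "true" values
counts = rng.poisson(y0 * np.exp(r_true * t))  # synthetic daily counts

def log_post(r):
    """Poisson log-likelihood (constants dropped) with a flat prior on (0, 1)."""
    if not (0.0 < r < 1.0):
        return -np.inf
    lam = y0 * np.exp(r * t)
    return np.sum(counts * np.log(lam) - lam)

chain, r = [], 0.5                             # deliberately poor start
lp = log_post(r)
for _ in range(20_000):
    prop = r + 0.01 * rng.standard_normal()    # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
        r, lp = prop, lp_prop
    chain.append(r)
r_hat = np.mean(chain[5000:])                  # posterior mean after burn-in
```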
Basis adaptation in Homogeneous Chaos spaces relies on a suitable rotation of the underlying Gaussian germ. Several rotations have been proposed in the literature, resulting in adaptations with different convergence properties. In this paper we present a new adaptation mechanism that builds on compressive sensing algorithms, resulting in a reduced polynomial chaos approximation with optimal sparsity. The developed adaptation algorithm consists of a two-step optimization procedure that computes the optimal coefficients and the input projection matrix of a low dimensional chaos expansion with respect to an optimally rotated basis. We demonstrate the attractive features of our algorithm through several numerical examples including the application on Large-Eddy Simulation (LES) calculations of turbulent combustion in a HIFiRE scramjet engine.
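The compressive-sensing building block can be illustrated with a generic orthogonal matching pursuit (not the paper's two-step rotation algorithm): a sparse coefficient vector is recovered from far fewer samples than basis terms, which is what makes sparse chaos approximations affordable.

```python
import numpy as np

rng = np.random.default_rng(3)
n_basis, n_samp = 60, 40
A = rng.standard_normal((n_samp, n_basis)) / np.sqrt(n_samp)  # measurement matrix
c_true = np.zeros(n_basis)
c_true[[3, 17, 42]] = [1.5, -2.0, 0.7]        # 3-sparse "chaos" coefficients
y = A @ c_true                                # underdetermined observations

def omp(A, y, k):
    """Orthogonal matching pursuit for k-sparse recovery."""
    support, resid = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ resid)))   # most correlated column
        support.append(j)
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        resid = y - A[:, support] @ sol
    c = np.zeros(A.shape[1])
    c[support] = sol
    return c

c_hat = omp(A, y, 3)
```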
Model error estimation remains one of the key challenges in uncertainty quantification and predictive science. For computational models of complex physical systems, model error, also known as structural error or model inadequacy, is often the largest contributor to the overall predictive uncertainty. This work builds on a recently developed framework of embedded, internal model correction in order to represent and quantify structural errors, together with model parameters, within a Bayesian inference context. We focus specifically on a Polynomial Chaos representation with additive modification of existing model parameters, enabling a non-intrusive procedure for efficient approximate likelihood construction, model error estimation, and disambiguation of model and data errors’ contributions to predictive uncertainty. The framework is demonstrated on several synthetic examples, as well as on a chemical ignition problem.
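The embedding idea can be sketched on a toy model (illustrative values, not the paper's problem): a physical parameter lam is augmented with a stochastic term, lam(xi) = lam0 + alpha*xi with xi ~ N(0,1), so that structural error appears as spread in the pushed-forward prediction rather than being lumped into the data noise.

```python
import numpy as np

rng = np.random.default_rng(4)
lam0, alpha, t = 1.0, 0.1, 1.0   # nominal parameter, embedded-error scale, time

def model(lam, t):
    return np.exp(-lam * t)      # toy exponential-decay forward model

xi = rng.standard_normal(200_000)
pred = model(lam0 + alpha * xi, t)        # pushforward of the embedded parameter
mean_mc, std_mc = pred.mean(), pred.std()
# Analytic check: exp(-lam(xi)*t) is lognormal, so its mean is known
mean_exact = np.exp(-lam0 * t + 0.5 * (alpha * t) ** 2)
```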
Rate coefficients are key quantities in gas-phase kinetics and can be determined theoretically via master equation (ME) calculations. Rate coefficients characterize how fast a certain chemical species reacts away, due to collisions, into a specific product. Some of these collisions simply transfer energy between the colliding partners, in which case the initial chemical species can undergo a unimolecular reaction: dissociation or isomerization. Other collisions are reactive, and the colliding partners either exchange atoms (direct reactions) or form complexes that can themselves react further or be stabilized by deactivating collisions with a bath gas. The inputs to MEs are molecular parameters: geometries, energies, and frequencies determined from ab initio calculations. While the calculation of these rate coefficients using ab initio data is becoming routine in many cases, the determination of the uncertainties of the rate coefficients is often ignored, sometimes crudely assessed by independently varying just a few of the numerous parameters, and only occasionally studied in detail. In this study, molecular frequencies, barrier heights, well depths, and imaginary frequencies (needed to calculate quantum mechanical tunneling) were automatically perturbed in an uncorrelated fashion. Our Python tool, MEUQ, takes user requests to change all or specified well, barrier, or bimolecular product parameters for a reaction. We propagate the uncertainty in these input parameters and perform global sensitivity analysis of the rate coefficients for the ethyl + O2 system using state-of-the-art uncertainty quantification (UQ) techniques via a Python interface to the UQ Toolkit (www.sandia.gov/uqtoolkit). A total of 10,000 sets of rate coefficients were collected after perturbing 240 molecular parameters. With our methodology, sensitive mechanistic steps can be revealed to a modeler in a straightforward manner for identification of significant and negligible influences in bimolecular reactions.
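The kind of global sensitivity analysis used here can be sketched with a generic pick-freeze estimator of first-order Sobol indices (not the MEUQ/UQTk implementation), on a toy linear model whose indices are known exactly.

```python
import numpy as np

rng = np.random.default_rng(5)

def f(x):
    # Toy model standing in for "parameters -> rate coefficient":
    # exact Sobol indices are S1 = 4/5 and S2 = 1/5.
    return 2.0 * x[:, 0] + 1.0 * x[:, 1]

n, d = 100_000, 2
A = rng.standard_normal((n, d))   # two independent sample blocks
B = rng.standard_normal((n, d))
fA, fB = f(A), f(B)
var = fA.var()
S = []
for i in range(d):
    ABi = B.copy()
    ABi[:, i] = A[:, i]           # "freeze" input i at the A-block values
    # Saltelli-type first-order index estimator
    S.append(np.mean(fA * (f(ABi) - fB)) / var)
```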
A new method for fast evaluation of high dimensional integrals arising in quantum mechanics is proposed. The method is based on sparse approximation of a high dimensional function followed by a low-rank compression. In the first step, we interpret the high dimensional integrand as a tensor in a suitable tensor product space and determine its entries by a compressed sensing based algorithm using only a few function evaluations. Secondly, we implement a rank reduction strategy to compress this tensor in a suitable low-rank tensor format using standard tensor compression tools. This allows representing a high dimensional integrand function as a small sum of products of low dimensional functions. Finally, a low dimensional Gauss–Hermite quadrature rule is used to integrate this low-rank representation, thus alleviating the curse of dimensionality. Numerical tests on synthetic functions, as well as on energy correction integrals for water and formaldehyde molecules demonstrate the efficiency of this method using very few function evaluations as compared to other integration strategies.
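The payoff of the low-rank step can be shown in isolation (toy rank-1 term, not the paper's compression pipeline): once the integrand is separable, a d-dimensional Gaussian-weighted integral factors into a product of cheap 1-D Gauss-Hermite quadratures.

```python
import numpy as np

d, nq = 6, 20
x, w = np.polynomial.hermite.hermgauss(nq)  # nodes/weights for weight exp(-x^2)
xi = np.sqrt(2.0) * x                       # rescale to the N(0,1) weight
wn = w / np.sqrt(np.pi)
# One 1-D factor of the rank-1 integrand prod_i cos(x_i);
# analytically, E[cos(xi)] = exp(-1/2) for xi ~ N(0,1)
one_dim = np.sum(wn * np.cos(xi))
# Separability: the d-dimensional integral is the product of 1-D integrals
integral = one_dim ** d
```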
The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system’s stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. Finally, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
We conduct a global sensitivity analysis (GSA) of the Energy Exascale Earth System Model (E3SM) land model (ELM) to calculate the sensitivity of five key carbon cycle outputs to 68 model parameters. This GSA is conducted by first constructing a Polynomial Chaos (PC) surrogate via a new Weighted Iterative Bayesian Compressive Sensing (WIBCS) algorithm for adaptive basis growth, leading to a sparse, high-dimensional PC surrogate with 3,000 model evaluations. The PC surrogate allows efficient extraction of GSA information, leading to further dimensionality reduction. The GSA is performed at 96 FLUXNET sites covering multiple plant functional types (PFTs) and climate conditions. About 20 of the model parameters are identified as sensitive, with the rest being relatively insensitive across all outputs and PFTs. These sensitivities are dependent on PFT, and are relatively consistent among sites within the same PFT. The five model outputs have a majority of their highly sensitive parameters in common. A common subset of sensitive parameters is also shared among PFTs, but some parameters are specific to certain types (e.g., deciduous phenology). The relative importance of these parameters shifts significantly among PFTs and with climatic variables such as mean annual temperature.
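A reason PC surrogates make GSA cheap is that Sobol indices follow directly from the expansion coefficients: each basis term's squared coefficient, weighted by the basis norm, contributes to the variance share of every input it involves. A toy 2-input Hermite PC (multi-indices and coefficient values are illustrative):

```python
from math import factorial

# term: (multi-index over the 2 inputs) -> PC coefficient
pc = {(0, 0): 1.0, (1, 0): 2.0, (0, 1): 1.0, (1, 1): 0.5}

def term_var(alpha, c):
    """Variance contribution of one PC term; E[He_a(xi)^2] = a! for
    probabilists' Hermite polynomials with standard normal inputs."""
    norm = 1.0
    for a in alpha:
        norm *= factorial(a)
    return c * c * norm

# Total variance: all terms except the mean (all-zero multi-index)
total = sum(term_var(a, c) for a, c in pc.items() if any(a))
# First-order (main-effect) Sobol indices: terms involving only input i
S_main = [
    sum(term_var(a, c) for a, c in pc.items()
        if a[i] > 0 and all(aj == 0 for j, aj in enumerate(a) if j != i))
    / total
    for i in range(2)
]
```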