Publications

Modeling Fast Diffusion Processes in Time Integration of Stiff Stochastic Differential Equations

Communications on Applied Mathematics and Computation

Han, Xiaoying; Najm, H.N.

Numerical algorithms for stiff stochastic differential equations are developed using linear approximations of the fast diffusion processes, under the assumption of decoupling between fast and slow processes. Three numerical schemes are proposed, all based on the linearized formulation albeit with different degrees of approximation. The schemes are of comparable complexity to the classical explicit Euler-Maruyama scheme but can achieve better accuracy at larger time steps in stiff systems. Convergence analysis is conducted for one of the schemes, showing that it has a strong convergence order of 1/2 and a weak convergence order of 1. The approximations leading to the other two schemes are discussed. Numerical experiments are carried out to examine the convergence of the proposed schemes on model problems.
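
As a point of reference for the schemes above, here is a minimal sketch of the classical explicit Euler-Maruyama baseline against which they are compared, applied to a stiff Ornstein-Uhlenbeck test problem; the model and its parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, t_span, dt, rng):
    """Classical explicit Euler-Maruyama for dX = a(X) dt + b(X) dW."""
    n_steps = int(round((t_span[1] - t_span[0]) / dt))
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))  # Brownian increment
        x[k + 1] = x[k] + drift(x[k]) * dt + diffusion(x[k]) * dW
    return x

# Hypothetical stiff Ornstein-Uhlenbeck problem: dX = -lam X dt + sigma dW.
# The explicit scheme needs lam * dt < 2 for stability, which is what
# drives the step-size restriction in stiff systems.
lam, sigma = 50.0, 1.0
path = euler_maruyama(lambda x: -lam * x, lambda x: sigma, x0=1.0,
                      t_span=(0.0, 1.0), dt=1e-3,
                      rng=np.random.default_rng(0))
```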

Mathematical Foundations for Nonlocal Interface Problems: Multiscale Simulations of Heterogeneous Materials (Final LDRD Report)

D'Elia, Marta D.; Bochev, Pavel B.; Foster, John E.; Glusa, Christian A.; Gulian, Mamikon G.; Gunzburger, Max G.; Trageser, Jeremy T.; Kuhlman, Kristopher L.; Martinez, Mario A.; Najm, H.N.; Silling, Stewart A.; Tupek, Michael T.; Xu, Xiao X.

Nonlocal models provide a much-needed predictive capability for important Sandia mission applications, ranging from fracture mechanics for nuclear components to subsurface flow for nuclear waste disposal, where traditional partial differential equation (PDE) models fail to capture effects due to long-range forces at the microscale and mesoscale. However, utilization of this capability is seriously compromised by the lack of a rigorous nonlocal interface theory, required for both application and efficient solution of nonlocal models. To unlock the full potential of nonlocal modeling, we developed a mathematically rigorous and physically consistent interface theory and demonstrated its scope in mission-relevant exemplar problems.

Trajectory design via unsupervised probabilistic learning on optimal manifolds

Data-Centric Engineering

Safta, Cosmin S.; Sparapany, Michael J.; Grant, Michael J.; Najm, H.N.

This article illustrates the use of unsupervised probabilistic learning techniques for the analysis of planetary reentry trajectories. A three-degree-of-freedom model was employed to generate optimal trajectories that comprise the training datasets. The algorithm first extracts the intrinsic structure in the data via a diffusion map approach. We find that the data resides on manifolds of much lower dimensionality than the high-dimensional state space that describes each trajectory. Using the diffusion coordinates on the graph of training samples, the probabilistic framework subsequently augments the original data with samples that are statistically consistent with the original set. The augmented samples are then used to construct conditional statistics that are ultimately assembled in a path planning algorithm. In this framework, the controls are determined stage by stage during the flight to adapt to changing mission objectives in real time.
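
A minimal sketch of the diffusion-map step described above, in its basic Gaussian-kernel form (without density renormalization refinements); the data layout and kernel bandwidth are assumptions for illustration.

```python
import numpy as np

def diffusion_map(X, eps, n_coords=2):
    """First few diffusion coordinates of the samples in the rows of X."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / eps)                      # Gaussian kernel matrix
    P = K / K.sum(axis=1, keepdims=True)       # Markov transition matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)             # trivial eigenvalue 1 first
    idx = order[1:n_coords + 1]                # skip the constant eigenvector
    return vals[idx].real * vecs[:, idx].real  # eigenvalue-scaled coordinates

# Rows of X would be flattened trajectory states (synthetic stand-in here).
X = np.random.default_rng(1).normal(size=(200, 10))
coords = diffusion_map(X, eps=10.0)
```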

Quantification of the effect of uncertainty on impurity migration in PISCES-A simulated with GITR

Nuclear Fusion

Younkin, T.Y.; Sargsyan, Khachik S.; Casey, Tiernan A.; Najm, H.N.; Canik, J.C.; Green, D.G.; Doerner, R.D.; Nishijima, D.N.; Baldwin, M.B.; Drobny, J.; Curreli, D.; Wirth, B.W.

A Bayesian inference strategy has been used to estimate uncertain inputs to global impurity transport code (GITR) modeling predictions of tungsten erosion and migration in the linear plasma device, PISCES-A. This allows quantification of GITR output uncertainty based on the uncertainties in measured PISCES-A plasma electron density and temperature profiles (ne, Te) used as inputs to GITR. The technique has been applied for comparison to dedicated experiments performed for high (4 × 10²² m⁻² s⁻¹) and low (5 × 10²¹ m⁻² s⁻¹) flux 250 eV He-plasma exposed tungsten (W) targets designed to assess the net and gross erosion of tungsten, and corresponding W impurity transport. The W target design and orientation, impurity collector, and diagnostics have been designed to eliminate complexities associated with tokamak divertor plasma exposures (inclined target, mixed plasma species, re-erosion, etc.) to benchmark results against the trace impurity transport model simulated by GITR. The simulated results of the erosion, migration, and re-deposition of W during the experiment from the GITR code coupled to materials response models are presented. Specifically, the modeled and experimental W I emission spectroscopy data for a 429.4 nm line and net erosion through the target and collector mass difference measurements are compared. Furthermore, the methodology provides predictions of observable quantities of interest with quantified uncertainty, allowing estimation of moments, together with the sensitivities to plasma temperature and density.

Geometry optimization speedup through a geodesic approach to internal coordinates

Journal of Chemical Physics

Hermes, Eric H.; Sargsyan, Khachik S.; Najm, H.N.; Zador, Judit Z.

We present a new geodesic-based method for geometry optimization in a basis set of redundant internal coordinates. Our method updates the molecular geometry by following the geodesic generated by a displacement vector on the internal coordinate manifold, which dramatically reduces the number of steps required to converge to a minimum. Our method can be implemented in any existing optimization code, requiring only implementation of derivatives of the Wilson B-matrix and the ability to numerically solve an ordinary differential equation.
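
The abstract notes that the method needs only B-matrix derivatives and the ability to solve an ODE. As a generic illustration of following a geodesic by ODE integration, here is a sketch for an arbitrary metric G(q) using finite-difference Christoffel symbols; the actual metric on the internal-coordinate manifold (induced by the Wilson B-matrix) is not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

def geodesic_step(G, q0, dq, t_end=1.0, h=1e-5):
    """Follow the geodesic launched from q0 with velocity dq on a manifold
    with metric G(q), by solving q'' = -Gamma(q) q' q' (generic sketch)."""
    n = len(q0)

    def accel(q, v):
        Gq = G(q)
        dG = np.empty((n, n, n))             # dG[k] = dG/dq_k (finite diff)
        for k in range(n):
            e = np.zeros(n)
            e[k] = h
            dG[k] = (G(q + e) - G(q - e)) / (2 * h)
        # Gamma^a_bc v^b v^c contracted: g^{ad} (d_b g_dc - 0.5 d_d g_bc) v v
        rhs = (np.einsum('bac,b,c->a', dG, v, v)
               - 0.5 * np.einsum('abc,b,c->a', dG, v, v))
        return -np.linalg.solve(Gq, rhs)

    def f(t, y):
        q, v = y[:n], y[n:]
        return np.concatenate([v, accel(q, v)])

    sol = solve_ivp(f, (0.0, t_end), np.concatenate([q0, dq]), rtol=1e-8)
    return sol.y[:n, -1]

# Usage sketch: polar-coordinate metric on the plane, whose geodesics are
# straight lines in Cartesian coordinates.
q1 = geodesic_step(lambda q: np.diag([1.0, q[0] ** 2]),
                   np.array([1.0, 0.0]), np.array([0.0, 1.0]), t_end=0.5)
```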

Trajectory Optimization via Unsupervised Probabilistic Learning On Manifolds

Safta, Cosmin S.; Najm, H.N.; Grant, Michael J.; Sparapany, Michael J.

This report investigates the use of unsupervised probabilistic learning techniques for the analysis of hypersonic trajectories. The algorithm first extracts the intrinsic structure in the data via a diffusion map approach. Using the diffusion coordinates on the graph of training samples, the probabilistic framework augments the original data with samples that are statistically consistent with the original set. The augmented samples are then used to construct conditional statistics that are ultimately assembled in a path-planning algorithm. In this framework, the controls are determined stage by stage during the flight to adapt to changing mission objectives in real time. A 3DOF model was employed to generate the optimal hypersonic trajectories that comprise the training datasets. The diffusion map algorithm identified that the data resides on manifolds of much lower dimensionality than the high-dimensional state space that describes each trajectory. In addition to the path-planning workflow, we also propose an algorithm that utilizes the diffusion map coordinates along the manifold to label and possibly remove outlier samples from the training data. This algorithm can be used both to identify edge cases for further analysis and to remove them from the training set, creating a more robust set of samples for the path-planning process.

AEVmod – Atomic Environment Vector Module Documentation

Najm, H.N.; Yang, Yoona N.

This report outlines the mathematical formulation for the atomic environment vector (AEV) construction used in the aevmod software package. The AEV provides a summary of the geometry of a molecule or atomic configuration. We also present the formulation for the analytical Jacobian of the AEV with respect to the atomic Cartesian coordinates. The software provides functionality for both the AEV and the AEV Jacobian, as well as the AEV Hessian, which is available via reliance on the third-party library Sacado.

The origin of CEMA and its relation to CSP

Combustion and Flame

Goussis, Dimitris A.; Im, Hong G.; Najm, H.N.; Paolucci, Samuel; Valorani, Mauro

There currently exist two methods for analysing an explosive mode introduced by chemical kinetics in a reacting process: the Computational Singular Perturbation (CSP) algorithm and the Chemical Explosive Mode Analysis (CEMA). CSP was introduced in 1989 and addressed both dissipative and explosive modes encountered in the multi-scale dynamics that characterize the process, while CEMA was introduced in 2009 and addressed only the explosive modes. It is shown that (i) the algorithmic tools incorporated in CEMA were developed previously on the basis of CSP and (ii) the examination of explosive modes has been the subject of CSP-based works, reported before the introduction of CEMA.

Using computational singular perturbation as a diagnostic tool in ODE and DAE systems: a case study in heterogeneous catalysis

Combustion Theory and Modelling

Diaz-Ibarra, Oscar H.; Kim, Kyungjoo K.; Safta, Cosmin S.; Zador, Judit Z.; Najm, H.N.

We have extended the computational singular perturbation (CSP) method to differential algebraic equation (DAE) systems and demonstrated its application in a heterogeneous-catalysis problem. The extended method obtains the CSP basis vectors for DAEs from a reduced Jacobian matrix that takes the algebraic constraints into account. We use a canonical problem in heterogeneous catalysis, the transient continuous stirred tank reactor (T-CSTR), for illustration. The T-CSTR problem is modelled fundamentally as an ordinary differential equation (ODE) system, but it can be transformed to a DAE system if one approximates typically fast surface processes using algebraic constraints for the surface species. We demonstrate the application of CSP analysis for both ODE and DAE constructions of a T-CSTR problem, illustrating the dynamical response of the system in each case. We also highlight the utility of the analysis in commenting on the quality of any particular DAE approximation built using the quasi-steady state approximation (QSSA), relative to the ODE reference case.

CSPlib - A Software Toolkit for the Analysis of Dynamical Systems and Chemical Kinetic Models

Diaz-Ibarra, Oscar H.; Kim, Kyungjoo K.; Safta, Cosmin S.; Najm, H.N.

CSPlib is an open source software library for analyzing general ordinary differential equation (ODE) systems and detailed chemical kinetic ODE systems. It relies on the computational singular perturbation (CSP) method for the analysis of these systems. The software provides support for: general ODE models (gODE model class) for computing source terms and Jacobians for a generic ODE system; a TChem model (ChemElemODETChem model class) for computing the source term, Jacobian, other necessary chemical reaction data, as well as the rates of progress for a homogeneous batch reactor using an elementary-step detailed chemical kinetic reaction mechanism, relying on the TChem [2] library; a set of functions to compute essential elements of CSP analysis (Kernel class), including computation of the eigensolution of the Jacobian matrix, CSP basis vectors and co-vectors, time scales (reciprocals of the magnitudes of the Jacobian eigenvalues), mode amplitudes, CSP pointers, and the number of exhausted modes, relying on the Tines library; a set of functions to compute the eigensolution of the Jacobian matrix using the Tines library's GPU eigensolver; and a set of functions to compute CSP indices (Index class), including participation indices and both slow and fast importance indices.
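
A rough NumPy sketch of the Kernel-class ingredients listed above (eigensolution, basis vectors and co-vectors, time scales, mode amplitudes); this illustrates the standard eigen-based CSP construction, not CSPlib's actual API.

```python
import numpy as np

def csp_kernel(jac):
    """Eigen-based CSP quantities at one state point.

    Returns time scales tau_i = 1/|lambda_i|, basis vectors (columns of A),
    and co-vectors (rows of B), ordered fastest first, with B @ A = I.
    """
    vals, A = np.linalg.eig(jac)
    B = np.linalg.inv(A)                 # co-vectors: B @ A = identity
    order = np.argsort(-np.abs(vals))    # fastest (largest |lambda|) first
    vals, A, B = vals[order], A[:, order], B[order, :]
    tau = 1.0 / np.abs(vals)             # time scales (nonzero eigenvalues)
    return tau, vals, A, B

def mode_amplitudes(B, source):
    """CSP mode amplitudes f = B @ g for the ODE source term g."""
    return B @ source
```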

Effective construction of eigenvectors for a class of singular sparse matrices

Applied Mathematics Letters

Han, Xiaoying; Najm, H.N.

Fundamental results and an efficient algorithm for constructing eigenvectors corresponding to non-zero eigenvalues of matrices with zero rows and/or columns are developed. The formulation is based on the relation between eigenvectors of such matrices and the eigenvectors of their submatrices after removing all zero rows and columns. While easily implemented, the algorithm decreases the computation time needed for numerical eigenanalysis and resolves potential numerical eigensolver instabilities.
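
A sketch of the reduction-and-embedding idea as stated in the abstract (the paper's full construction and edge-case handling may differ): eigensolve the submatrix with zero rows and columns removed, set eigenvector entries on zero rows to zero, and recover entries on zero columns from the eigen-relation.

```python
import numpy as np

def eig_nonzero_modes(M, tol=1e-14):
    """Eigenpairs for nonzero eigenvalues of a matrix with zero rows/columns.

    Entries on zero rows must vanish (since lambda != 0); entries on zero
    columns follow from the eigen-relation v_j = (M v)_j / lambda.
    """
    zero_row = np.all(np.abs(M) < tol, axis=1)
    zero_col = np.all(np.abs(M) < tol, axis=0)
    keep = ~(zero_row | zero_col)
    vals, W = np.linalg.eig(M[np.ix_(keep, keep)])
    V = np.zeros((M.shape[0], len(vals)), dtype=complex)
    V[keep, :] = W
    fill = zero_col & ~zero_row          # zero column but nonzero row
    for k, lam in enumerate(vals):
        if abs(lam) > tol:
            V[fill, k] = (M[fill, :] @ V[:, k]) / lam
    return vals, V
```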

Explicit time integration of the stiff chemical Langevin equations using computational singular perturbation

Journal of Chemical Physics

Han, Xiaoying; Valorani, Mauro; Najm, H.N.

A stable explicit time-scale splitting algorithm for stiff chemical Langevin equations (CLEs) is developed, based on the concept of computational singular perturbation. The drift term of the CLE is projected onto basis vectors that span the fast and slow subdomains. The corresponding fast modes exhaust quickly, in the mean sense, and the system state then evolves, with a mean drift controlled by slow modes, on a random manifold. The drift-driven time evolution of the state due to fast exhausted modes is modeled algebraically as an exponential decay process, while that due to slow drift modes and diffusional processes is integrated explicitly. This allows time integration step sizes much larger than those required by typical explicit numerical methods for stiff stochastic differential equations. The algorithm is motivated and discussed, and extensive numerical experiments are conducted to illustrate its accuracy and stability with a number of model systems.

Compressive sensing adaptation for polynomial chaos expansions

Journal of Computational Physics

Tsilifis, Panagiotis; Huan, Xun H.; Safta, Cosmin S.; Sargsyan, Khachik S.; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, H.N.; Ghanem, Roger G.

Basis adaptation in Homogeneous Chaos spaces relies on a suitable rotation of the underlying Gaussian germ. Several rotations have been proposed in the literature, resulting in adaptations with different convergence properties. In this paper we present a new adaptation mechanism that builds on compressive sensing algorithms, resulting in a reduced polynomial chaos approximation with optimal sparsity. The developed adaptation algorithm consists of a two-step optimization procedure that computes the optimal coefficients and the input projection matrix of a low dimensional chaos expansion with respect to an optimally rotated basis. We demonstrate the attractive features of our algorithm through several numerical examples, including application to Large-Eddy Simulation (LES) calculations of turbulent combustion in a HIFiRE scramjet engine.
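
For flavor, a one-dimensional stand-in for the compressive-sensing step: recover sparse Hermite polynomial chaos coefficients from few samples with an l1-penalized (Lasso) solver. The target function, sample counts, and solver settings are all hypothetical; the paper works with rotated, multi-dimensional bases.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
order, n_samp = 12, 40
xi = rng.normal(size=n_samp)                       # Gaussian germ samples

def model(x):                                      # hypothetical target,
    return 1.0 + 0.5 * x + 0.1 * (x**3 - 3 * x)    # sparse in He_0, He_1, He_3

y = model(xi)

# Measurement matrix: Psi[i, k] = He_k(xi_i), probabilists' Hermite.
Psi = np.column_stack([hermeval(xi, np.eye(order + 1)[k])
                       for k in range(order + 1)])
coef = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000).fit(Psi, y).coef_
```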

Embedded Model Error Representation for Bayesian Model Calibration

arXiv.org Repository

Huan, Xun H.; Sargsyan, Khachik S.; Najm, H.N.

Model error estimation remains one of the key challenges in uncertainty quantification and predictive science. For computational models of complex physical systems, model error, also known as structural error or model inadequacy, is often the largest contributor to the overall predictive uncertainty. This work builds on a recently developed framework of embedded, internal model correction in order to represent and quantify structural errors, together with model parameters, within a Bayesian inference context. We focus specifically on a Polynomial Chaos representation with additive modification of existing model parameters, enabling a non-intrusive procedure for efficient approximate likelihood construction, model error estimation, and disambiguation of model and data errors' contributions to predictive uncertainty. The framework is demonstrated on several synthetic examples, as well as on a chemical ignition problem.

Enhancing model predictability for a scramjet using probabilistic learning on manifolds

AIAA Journal

Soize, Christian; Ghanem, Roger; Safta, Cosmin S.; Huan, Xun H.; Vane, Zachary P.; Oefelein, Joseph C.; Lacaze, Guilhem; Najm, H.N.

The computational burden of a large-eddy simulation for reactive flows is exacerbated in the presence of uncertainty in flow conditions or kinetic variables. A comprehensive statistical analysis, with a sufficiently large number of samples, remains elusive. Statistical learning is an approach that allows for extracting more information using fewer samples. Such procedures, if successful, will greatly enhance the predictability of models in the sense of improving exploration and characterization of uncertainty due to model error and input dependencies, all while being constrained by the size of the associated statistical samples. In this paper, it is shown how a recently developed procedure for probabilistic learning on manifolds can serve to improve the predictability in a probabilistic framework of a scramjet simulation. The estimates of the probability density functions of the quantities of interest are improved together with estimates of the statistics of their maxima. It is also demonstrated how the improved statistical model adds critical insight to the performance of the model.

Estimating the joint distribution of rate parameters across multiple reactions in the absence of experimental data

Proceedings of the Combustion Institute

Casey, Tiernan A.; Najm, H.N.

A procedure for determining the joint uncertainty of Arrhenius parameters across multiple combustion reactions of interest is demonstrated. This approach is capable of constructing the joint distribution of the Arrhenius parameters arising from the uncertain measurements performed in specific target experiments without having direct access to the underlying experimental data. The method involves constructing an ensemble of hypothetical data sets with summary statistics consistent with the available information reported by the experimentalists, followed by a fitting procedure that learns the structure of the joint parameter density across reactions using this consistent hypothetical data as evidence. The procedure is formalized in a Bayesian statistical framework, employing maximum-entropy and approximate Bayesian computation methods and utilizing efficient Markov chain Monte Carlo techniques to explore data and parameter spaces in a nested algorithm. We demonstrate the application of the method in the context of experiments designed to measure the rates of selected chain reactions in the H2-O2 system and highlight the utility of this approach for revealing the critical correlations between the parameters within a single reaction and across reactions, as well as for maximizing consistency when utilizing rate parameter information in predictive combustion modeling of systems of interest.

Enhancing statistical moment calculations for stochastic Galerkin solutions with Monte Carlo techniques

Journal of Computational Physics

Chowdhary, Kenny; Safta, Cosmin S.; Najm, H.N.

In this work, we provide a method for enhancing stochastic Galerkin moment calculations for the linear elliptic equation with random diffusivity using an ensemble of Monte Carlo solutions. This hybrid approach combines the accuracy of low-order stochastic Galerkin and the computational efficiency of Monte Carlo methods to provide statistical moment estimates which are significantly more accurate than performing each method individually. The hybrid approach involves computing a low-order stochastic Galerkin solution, after which Monte Carlo techniques are used to estimate the residual. We show that the combined stochastic Galerkin solution and residual is superior in both time and accuracy for a one-dimensional test problem and a more computationally intensive two-dimensional linear elliptic problem, for both the mean and variance quantities.
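
A schematic of the hybrid estimator's structure, in a control-variate flavor; the paper's actual residual formulation for the stochastic Galerkin system is more specific than this sketch, and the toy model below is an assumption.

```python
import numpy as np

def hybrid_mean(expensive_model, sg_surrogate, sg_mean, xi_samples):
    """Hybrid estimate: low-order stochastic Galerkin mean plus a Monte
    Carlo estimate of the residual between model and surrogate."""
    resid = np.array([expensive_model(xi) - sg_surrogate(xi)
                      for xi in xi_samples])
    return sg_mean + resid.mean(axis=0)

# Toy check: model y = xi^2 with a crude surrogate y = 1 (its exact mean);
# the Monte Carlo residual corrects the surrogate's pointwise error.
xi = np.random.default_rng(7).normal(size=500)
est = hybrid_mean(lambda z: z**2, lambda z: 1.0, 1.0, xi)   # approx 1.0
```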

Interatomic Potentials Models for Cu-Ni and Cu-Zr Alloys

Safta, Cosmin S.; Geraci, Gianluca G.; Eldred, Michael S.; Najm, H.N.; Riegner, David R.; Windl, Wolfgang W.

This study explores a Bayesian calibration framework for the RAMPAGE alloy potential model for Cu-Ni and Cu-Zr systems, respectively. In RAMPAGE potentials, it is proposed that once calibrated potentials for individual elements are available, the inter-species interactions can be described by fitting a Morse potential for pair interactions with three parameters, while densities for the embedding function can be scaled by two parameters from the elemental densities. Global sensitivity analysis tools were employed to understand the impact each parameter has on the MD simulation results. A transitional Markov chain Monte Carlo algorithm was used to generate samples from the multimodal posterior distribution consistent with the discrepancy between MD simulation results and DFT data. For the Cu-Ni system the posterior predictive tests indicate that the fitted interatomic potential model agrees well with the DFT data, justifying the basic RAMPAGE assumptions. For the Cu-Zr system, where the phase diagram suggests more complicated atomic interactions than in the case of Cu-Ni, the RAMPAGE potential captured only a subset of the DFT data. The resulting posterior distribution for the 5 model parameters exhibited several modes, with each mode corresponding to specific simulation data and a suboptimal agreement with the DFT results.

Probabilistic inference of reaction rate parameters from summary statistics

Combustion Theory and Modelling

Khalil, Mohammad K.; Najm, H.N.

This investigation tackles the probabilistic parameter estimation problem involving the Arrhenius parameters for the rate coefficient of the chain branching reaction H + O2 → OH + O. This is achieved in a Bayesian inference framework that uses indirect data from the literature in the form of summary statistics by approximating the maximum entropy solution with the aid of approximate Bayesian computation. The summary statistics include nominal values and uncertainty factors of the rate coefficient, obtained from shock-tube experiments performed at various initial temperatures. The Bayesian framework allows for the incorporation of uncertainty in the rate coefficient of a secondary reaction, namely OH + H2 → H2O + H, resulting in a consistent joint probability density on Arrhenius parameters for the two rate coefficients. It also allows for uncertainty quantification in numerical ignition predictions while conforming with the published summary statistics. The method relies on probabilistic reconstruction of the unreported data, OH concentration profiles from shock-tube experiments, along with the unknown Arrhenius parameters. The data inference is performed using a Markov chain Monte Carlo sampling procedure that relies on an efficient adaptive quadrature in estimating relevant integrals needed for data likelihood evaluations. For further efficiency gains, local Padé–Legendre approximants are used as surrogates for the time histories of OH concentration, alleviating the need for 0-D auto-ignition simulations. The reconstructed realisations of the missing data are used to provide a consensus joint posterior probability density on the unknown Arrhenius parameters via probabilistic pooling. Uncertainty quantification analysis is performed for stoichiometric hydrogen–air auto-ignition computations to explore the impact of uncertain parameter correlations on a range of quantities of interest.

Compressed sparse tensor based quadrature for vibrational quantum mechanics integrals

Computer Methods in Applied Mechanics and Engineering

Rai, P.; Sargsyan, Khachik S.; Najm, H.N.

A new method for fast evaluation of high dimensional integrals arising in quantum mechanics is proposed. The method is based on sparse approximation of a high dimensional function followed by a low-rank compression. In the first step, we interpret the high dimensional integrand as a tensor in a suitable tensor product space and determine its entries by a compressed sensing based algorithm using only a few function evaluations. In the second step, we implement a rank reduction strategy to compress this tensor in a suitable low-rank tensor format using standard tensor compression tools. This allows representing a high dimensional integrand function as a small sum of products of low dimensional functions. Finally, a low dimensional Gauss–Hermite quadrature rule is used to integrate this low-rank representation, thus alleviating the curse of dimensionality. Numerical tests on synthetic functions, as well as on energy correction integrals for water and formaldehyde molecules, demonstrate the efficiency of this method using very few function evaluations as compared to other integration strategies.
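
Once an integrand is in separated (sum-of-products) form, the high-dimensional Gaussian integral factors into one-dimensional Gauss-Hermite quadratures. A sketch of that final step only (the sparse-recovery and rank-compression steps are not shown, and the example factors are hypothetical):

```python
import numpy as np

def integrate_separated(factors, n_quad=20):
    """Integrate sum_r prod_j g[r][j](x_j) against the standard Gaussian
    weight in each dimension, one 1-D Gauss-Hermite rule per factor."""
    # hermgauss gives nodes/weights for weight exp(-x^2); rescale to the
    # standard normal density exp(-x^2/2)/sqrt(2*pi).
    t, w = np.polynomial.hermite.hermgauss(n_quad)
    x = np.sqrt(2.0) * t
    w = w / np.sqrt(np.pi)
    total = 0.0
    for rank_term in factors:            # one rank-1 term of the expansion
        term = 1.0
        for g in rank_term:              # one dimension of that term
            term *= np.sum(w * g(x))
        total += term
    return total

# Example: E[x1^2 * x2^2] for independent standard normals, which is 1.
val = integrate_separated([[lambda x: x**2, lambda x: x**2]])
```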

Chance-constrained economic dispatch with renewable energy and storage

Computational Optimization and Applications

Cheng, Jianqiang; Chen, Richard L.; Najm, H.N.; Pinar, Ali P.; Safta, Cosmin S.; Watson, Jean-Paul W.

Increasing penetration levels of renewables have transformed how power systems are operated. High levels of uncertainty in production make it increasingly difficult to guarantee operational feasibility; instead, constraints may only be satisfied with high probability. We present a chance-constrained economic dispatch model that efficiently integrates energy storage and high renewable penetration to satisfy renewable portfolio requirements. Specifically, we require that wind energy contribute at least a prespecified proportion of the total demand and that the scheduled wind energy is deliverable with high probability. We develop an approximate partial sample average approximation (PSAA) framework to enable efficient solution of large-scale chance-constrained economic dispatch problems. Computational experiments on the IEEE-24 bus system show that the proposed PSAA approach is more accurate, closer to the prescribed satisfaction tolerance, and approximately 100 times faster than standard sample average approximation. Finally, the improved efficiency of our PSAA approach enables solution of a larger WECC-240 test system in minutes.
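
A toy illustration of the sample-average-approximation idea behind the chance constraint (plain SAA rather than the paper's partial SAA; the margin and forecast-error model are hypothetical):

```python
import numpy as np

def saa_chance_feasible(schedule_margin, error_samples, eps=0.05):
    """Sample average approximation check of a chance constraint:
    P(forecast shortfall <= margin) >= 1 - eps."""
    violation_rate = np.mean(error_samples > schedule_margin)
    return violation_rate <= eps

rng = np.random.default_rng(3)
errors = rng.normal(0.0, 10.0, size=5000)   # hypothetical MW forecast errors
ok = saa_chance_feasible(schedule_margin=20.0, error_samples=errors)
```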

Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

AIAA Journal

Huan, Xun H.; Safta, Cosmin S.; Sargsyan, Khachik S.; Geraci, Gianluca G.; Eldred, Michael S.; Vane, Zachary P.; Lacaze, Guilhem L.; Oefelein, Joseph C.; Najm, H.N.

The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system’s stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. Finally, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

Low-rank canonical-tensor decomposition of potential energy surfaces: application to grid-based diagrammatic vibrational Green’s function theory

Molecular Physics

Rai, Prashant R.; Sargsyan, Khachik S.; Najm, H.N.; Hermes, Matthew R.; Hirata, So

A new method is proposed for a fast evaluation of high-dimensional integrals of potential energy surfaces (PES) that arise in many areas of quantum dynamics. It decomposes a PES into a canonical low-rank tensor format, reducing its integral into a relatively short sum of products of low-dimensional integrals. The decomposition is achieved by the alternating least squares (ALS) algorithm, requiring only a small number of single-point energy evaluations. Therefore, it eradicates a force-constant evaluation as the hotspot of many quantum dynamics simulations and also possibly lifts the curse of dimensionality. This general method is applied to the anharmonic vibrational zero-point and transition energy calculations of molecules using the second-order diagrammatic vibrational many-body Green's function (XVH2) theory with a harmonic-approximation reference. In this application, high-dimensional PES and Green's functions are both subjected to a low-rank decomposition. Evaluating the molecular integrals over a low-rank PES and Green's functions as sums of low-dimensional integrals using the Gauss–Hermite quadrature, this canonical-tensor-decomposition-based XVH2 (CT-XVH2) achieves an accuracy of 0.1 cm⁻¹ or higher and nearly an order of magnitude speedup as compared with the original algorithm using force constants for water and formaldehyde.

Inference given summary statistics

Handbook of Uncertainty Quantification

Najm, H.N.; Chowdhary, Kenny

In many practical situations, where one is interested in employing Bayesian inference methods to infer parameters of interest, a significant challenge is that actual data is not available. Rather, what is most commonly available in the literature are summary statistics on the data, on parameters of interest, or on functions thereof. In this chapter, we present a general framework relying on the maximum entropy principle, and employing approximate Bayesian computation methods, to infer a joint posterior density on parameters of interest given summary statistics, as well as other known details about the experiment or observational system behind the published statistics. By essentially redoing the experimental fitting using proposed data sets, the method ensures that the inferred joint posterior density on model parameters is consistent with the given statistics and with the model.
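
A minimal rejection-ABC sketch of the idea: propose parameters, regenerate a hypothetical data set, and keep proposals whose recomputed summary statistics match the reported ones. The toy statistics, priors, and tolerances are assumptions; the chapter's construction adds maximum-entropy arguments and posterior pooling on top of this skeleton.

```python
import numpy as np

def abc_posterior(prior_sampler, simulate, reported_stats, tol, n_keep):
    """Rejection ABC: keep parameter draws whose simulated summary
    statistics fall within tol of the reported ones."""
    kept = []
    while len(kept) < n_keep:
        theta = prior_sampler()
        stats = simulate(theta)
        if np.linalg.norm(stats - reported_stats) < tol:
            kept.append(theta)
    return np.array(kept)

# Toy: infer (mu, sigma) given only a reported sample mean and std.
rng = np.random.default_rng(4)
reported = np.array([2.0, 0.5])                    # stats "from the literature"

def sim(theta):
    y = rng.normal(theta[0], theta[1], size=100)   # hypothetical data set
    return np.array([y.mean(), y.std()])

post = abc_posterior(lambda: rng.uniform([0.0, 0.1], [4.0, 2.0]), sim,
                     reported, tol=0.15, n_keep=100)
```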

Computational singular perturbation analysis of stochastic chemical systems with stiffness

Journal of Computational Physics

Wang, Lijin; Han, Xiaoying; Cao, Yanzhao; Najm, H.N.

Computational singular perturbation (CSP) is a useful method for analysis, reduction, and time integration of stiff ordinary differential equation systems. It has found dominant utility, in particular, in chemical reaction systems with a large range of time scales at the continuum, deterministic level. On the other hand, CSP is not directly applicable to chemical reaction systems at the micro or meso scale, where stochasticity plays a non-negligible role and thus has to be taken into account. In this work we develop a novel stochastic computational singular perturbation (SCSP) analysis and time integration framework, and an associated algorithm, that can be used not only to construct accurate and efficient numerical solutions to stiff stochastic chemical reaction systems, but also to analyze the dynamics of the reduced stochastic reaction systems. The algorithm is illustrated by an application to a benchmark stochastic differential equation model, and numerical experiments are carried out to demonstrate the effectiveness of the construction.

Uncertainty quantification in LES of channel flow

International Journal for Numerical Methods in Fluids

Safta, Cosmin S.; Blaylock, Myra L.; Templeton, Jeremy A.; Domino, Stefan P.; Sargsyan, Khachik S.; Najm, H.N.

In this paper, we present a Bayesian framework for estimating joint densities for large eddy simulation (LES) sub-grid scale model parameters based on canonical forced isotropic turbulence direct numerical simulation (DNS) data. The framework accounts for noise in the independent variables, and we present alternative formulations for accounting for discrepancies between model and data. To generate probability densities for flow characteristics, posterior densities for sub-grid scale model parameters are propagated forward through LES of channel flow and compared with DNS data. Synthesis of the calibration and prediction results demonstrates that model parameters have an explicit filter width dependence and are highly correlated. Discrepancies between DNS and calibrated LES results point to additional model form inadequacies that need to be accounted for.

Chemical model reduction under uncertainty

Combustion and Flame

Malpica Galassi, Riccardo; Valorani, Mauro; Najm, H.N.; Safta, Cosmin S.; Khalil, Mohammad K.; Ciottoli, Pietro P.

A general strategy for analysis and reduction of uncertain chemical kinetic models is presented, and its utility is illustrated in the context of ignition of hydrocarbon fuel–air mixtures. The strategy is based on a deterministic analysis and reduction method which employs computational singular perturbation analysis to generate simplified kinetic mechanisms, starting from a detailed reference mechanism. We model uncertain quantities in the reference mechanism, namely the Arrhenius rate parameters, as random variables with prescribed uncertainty factors. We propagate this uncertainty to obtain the probability of inclusion of each reaction in the simplified mechanism. We propose probabilistic error measures to compare predictions from the uncertain reference and simplified models, based on the comparison of the uncertain dynamics of the state variables, where the mixture entropy is chosen as progress variable. We employ the construction for the simplification of an uncertain mechanism in an n-butane–air mixture homogeneous ignition case, where a 176-species, 1111-reaction detailed kinetic model for the oxidation of n-butane is used with uncertainty factors assigned to each Arrhenius rate pre-exponential coefficient. This illustration is employed to highlight the utility of the construction, and the performance of a family of simplified models produced depending on chosen thresholds on importance and marginal probabilities of the reactions.
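
A schematic of the inclusion-probability computation described above: sample the uncertain rate parameters, recompute reaction importance, and count how often each reaction survives the threshold. Here `importance_index` and the sampler are hypothetical stand-ins for the CSP-based importance analysis.

```python
import numpy as np

def inclusion_probability(importance_index, sample_rates, n_samp=1000, tol=0.01):
    """Probability that each reaction survives simplification: fraction of
    sampled mechanisms whose importance index exceeds the threshold."""
    counts = None
    for _ in range(n_samp):
        I = importance_index(sample_rates())    # one index per reaction
        keep = np.abs(I) > tol
        counts = keep.astype(float) if counts is None else counts + keep
    return counts / n_samp

# Toy usage with a made-up importance measure over 10 reactions.
rng = np.random.default_rng(8)
p_incl = inclusion_probability(lambda k: k / k.sum(),
                               lambda: rng.lognormal(size=10), tol=0.05)
```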

Inference of H2O2 thermal decomposition rate parameters from experimental statistics

10th U.S. National Combustion Meeting

Casey, Tiernan A.; Khalil, Mohammad K.; Najm, H.N.

The thermal decomposition of H2O2 is an important process in hydrocarbon combustion, playing a particularly crucial role in providing a source of radicals at high pressure, where it controls the third explosion limit in the H2-O2 system, and also acting as a branching reaction in intermediate-temperature hydrocarbon oxidation. As such, understanding the uncertainty in the rate expression for this reaction is crucial for predictive combustion computations. Raw experimental measurement data, and its associated noise and uncertainty, is typically unreported in most investigations of elementary reaction rates, making the direct derivation of the joint uncertainty structure of the parameters in rate expressions difficult. To overcome this, we employ a statistical inference procedure, relying on maximum entropy and approximate Bayesian computation methods, and using a two-level nested Markov chain Monte Carlo algorithm, to arrive at a posterior density on rate parameters for a selected case of laser absorption measurements in a shock tube study, subject to the constraints imposed by the reported experimental statistics. The procedure constructs a set of H2O2 concentration decay profiles consistent with these reported statistics. These consistent data sets are then used to determine the joint posterior density on the rate parameters through straightforward Bayesian inference. Broadly, the method also provides a framework for the replication and comparison of missing data from different experiments, based on reported statistics, for the generation of consensus rate expressions.

Inference of reaction rate parameters based on summary statistics from experiments

Proceedings of the Combustion Institute

Khalil, Mohammad K.; Chowdhary, K.; Safta, Cosmin S.; Sargsyan, Khachik S.; Najm, H.N.

Bayesian inference and maximum entropy methods were employed for the estimation of the joint probability density for the Arrhenius rate parameters of the rate coefficient of the H2/O2-mechanism chain branching reaction H + O2 → OH + O. A consensus joint posterior on the parameters was obtained by pooling the posterior parameter densities given each consistent data set. Efficient surrogates for the OH concentration were constructed using a combination of Padé and polynomial approximants. Gauss-Hermite quadrature with Gaussian proposal probability density functions for moment computation was used, resulting in orders-of-magnitude speedup in data likelihood evaluation. The consistent data sets resulted in nearly Gaussian conditional parameter probability density functions. The resulting pooled parameter probability density function was propagated through stoichiometric H2-air auto-ignition computations to illustrate the need to account for correlations among the Arrhenius rate parameters of a single reaction and across the rate parameters of different reactions.

Missing experimental data and rate parameter inference for H2+OH=H2O+H

2017 Fall Technical Meeting of the Western States Section of the Combustion Institute, WSSCI 2017

Casey, Tiernan A.; Najm, H.N.

The reaction of OH with H2 is a crucial chain-propagating step in the H2-O2 system thus making the specification of its rate, and its uncertainty, important for predicting the high-temperature combustion of hydrocarbons. In order to obtain an uncertain representation of this reaction rate in the absence of actual experimental data, we perform an inference procedure employing maximum entropy and approximate Bayesian computation methods to discover hypothetical data from a target shock-tube experiment designed to measure the reverse reaction rate. This method attempts to invert the fitting procedure from noisy measurement data to parameters, with associated uncertainty specifications, to arrive at candidate noisy data sets consistent with these reported parameters and their uncertainties. The uncertainty structure of the Arrhenius parameters is obtained by fitting each hypothetical data set in a Bayesian framework and pooling the resulting joint parameter posterior densities to arrive at a consensus density. We highlight the advantages of working with a data-centric representation of the experimental uncertainty with regards to model choice and consistency, and the ability for combining experimental evidence from multiple sources. Finally, we demonstrate the utility of knowledge of the joint Arrhenius parameter density for performing predictive modeling of combustion systems of interest.

Uncertainty Quantification in LES Computations of Turbulent Multiphase Combustion in a Scramjet Engine (ScramjetUQ)

Najm, H.N.; Debusschere, Bert D.; Safta, Cosmin S.; Sargsyan, Khachik S.; Huan, Xun H.; Oefelein, Joseph C.; Lacaze, Guilhem M.; Vane, Zachary P.; Eldred, Michael S.; Geraci, Gianluca G.; Knio, Omar K.; Sraj, I.S.; Scovazzi, G.S.; Colomes, O.C.; Marzouk, Y.M.; Zahm, O.Z.; Menhorn, F.M.; Ghanem, R.G.; Tsilifis, P.T.

Abstract not provided.

Uncertainty Quantification in LES Computations of Turbulent Multiphase Combustion in a Scramjet Engine

Najm, H.N.; Debusschere, Bert D.; Safta, Cosmin S.; Sargsyan, Khachik S.; Huan, Xun H.; Oefelein, Joseph C.; Lacaze, Guilhem M.; Vane, Zachary P.; Eldred, Michael S.; Geraci, G.G.; Knio, O.K.; Sraj, I.S.; Scovazzi, G.S.; Colomes, O.C.; Marzouk, Y.M.; Zahm, O.Z.; Augustin, F.A.; Menhorn, F.M.; Ghanem, R.G.; Tsilifis, P.T.

Abstract not provided.

Bayesian estimation of Karhunen-Loève expansions: A random subspace approach

Journal of Computational Physics

Chowdhary, Kenny; Najm, H.N.

One of the most widely used procedures for dimensionality reduction of high dimensional data is Principal Component Analysis (PCA). More broadly, low-dimensional stochastic representation of random fields with finite variance is provided via the well known Karhunen-Loève expansion (KLE). The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation for the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., it minimizes the mean square error. In practice, this orthogonal transformation is determined by performing a singular value decomposition (SVD) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components or, equivalently, the basis functions of the KLE. Furthermore, this error is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure, allowing one to obtain a probabilistic model on the principal components, which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen-Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite dimensional stochastic process inspired by Brownian motion.
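
For contrast with the Bayesian treatment proposed here, a sketch of the standard SVD-based sample KLE whose sampling error the paper addresses; the grid, sample count, and Brownian-motion-like process are illustrative.

```python
import numpy as np

def kle_from_samples(Y, n_modes):
    """Truncated Karhunen-Loeve expansion from sample paths.
    Rows of Y are realizations of the process on a common grid."""
    mean = Y.mean(axis=0)
    U, s, Vt = np.linalg.svd(Y - mean, full_matrices=False)
    n_samp = Y.shape[0]
    eigvals = s[:n_modes] ** 2 / (n_samp - 1)   # covariance eigenvalues
    modes = Vt[:n_modes]                         # orthonormal basis functions
    # Reconstruction: y(t) ~ mean + sum_k sqrt(eigvals_k) * xi_k * modes_k(t)
    return mean, eigvals, modes

# Brownian-motion-like realizations on [0, 1].
rng = np.random.default_rng(5)
t = np.linspace(0.0, 1.0, 101)
Y = np.cumsum(rng.normal(0.0, np.sqrt(t[1]), size=(500, 101)), axis=1)
mean, lam, phi = kle_from_samples(Y, n_modes=5)
```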

Calibration and Forward Uncertainty Propagation for Large-eddy Simulations of Engineering Flows

Templeton, Jeremy A.; Blaylock, Myra L.; Domino, Stefan P.; Hewson, John C.; Kumar, Pritvi R.; Ling, Julia L.; Najm, H.N.; Ruiz, Anthony R.; Safta, Cosmin S.; Sargsyan, Khachik S.; Stewart, Alessia S.; Wagner, Gregory L.

The objective of this work is to investigate the efficacy of using calibration strategies from Uncertainty Quantification (UQ) to determine model coefficients for LES. As the target methods are for engineering LES, uncertainty from numerical aspects of the model must also be quantified. The ultimate goal of this research thread is to generate a cost-versus-accuracy curve for LES such that the cost can be minimized given an accuracy prescribed by an engineering need. Realization of this goal would enable LES to serve as a predictive simulation tool within the engineering design process.

Fault Resilient Domain Decomposition Preconditioner for PDEs

Sargsyan, Khachik S.; Safta, Cosmin S.; Debusschere, Bert D.; Najm, H.N.; Rizzi, Francesco N.; Morris Wright, Karla V.; Mycek, Paul M.; Maitre, Olivier L.; Knio, Omar K.

The move towards extreme-scale computing platforms challenges scientific simulations in many ways. Given the recent tendencies in computer architecture development, one needs to reformulate legacy codes in order to cope with large amounts of communication, system faults, and requirements of low memory usage per core. In this work, we develop a novel framework for solving partial differential equations (PDEs) via domain decomposition that reformulates the solution as a state-of-knowledge with a probabilistic interpretation. Such reformulation allows resiliency with respect to potential faults without having to apply fault detection, avoids unnecessary communication, and is generally well-positioned for rigorous uncertainty quantification studies that target improvements of the predictive fidelity of scientific models. We demonstrate our algorithm for one-dimensional PDE examples where artificial faults have been implemented as bit-flips in the binary representation of subdomain solutions.

Hybrid discrete/continuum algorithms for stochastic reaction networks

Journal of Computational Physics

Safta, Cosmin S.; Sargsyan, Khachik S.; Debusschere, Bert D.; Najm, H.N.

Direct solutions of the Chemical Master Equation (CME) governing Stochastic Reaction Networks (SRNs) are generally prohibitively expensive due to excessive numbers of possible discrete states in such systems. To enhance computational efficiency, we develop a hybrid approach where the evolution of states with low molecule counts is treated with the discrete CME model while that of states with large molecule counts is modeled by the continuum Fokker-Planck equation. The Fokker-Planck equation is discretized using a second-order finite volume approach with appropriate treatment of flux components. The numerical construction at the interface between the discrete and continuum regions implements the transfer of probability reaction by reaction according to the stoichiometry of the system. The performance of this novel hybrid approach is explored for a two-species circadian model with computational efficiency gains of about one order of magnitude.

Probabilistic methods for sensitivity analysis and calibration in the NASA challenge problem

Journal of Aerospace Information Systems

Safta, Cosmin S.; Sargsyan, Khachik S.; Najm, H.N.; Chowdhary, Kenny; Debusschere, Bert D.; Swiler, Laura P.; Eldred, Michael S.

In this paper, a series of algorithms are proposed to address the problems in the NASA Langley Research Center Multidisciplinary Uncertainty Quantification Challenge. A Bayesian approach is employed to characterize and calibrate the epistemic parameters based on the available data, whereas a variance-based global sensitivity analysis is used to rank the epistemic and aleatory model parameters. A nested sampling of the aleatory-epistemic space is proposed to propagate uncertainties from model parameters to output quantities of interest.
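
A sketch of the nested aleatory-epistemic sampling loop described above (the model and samplers are hypothetical): each outer epistemic draw yields one output ensemble, from which an ensemble of output CDFs can be formed.

```python
import numpy as np

def nested_propagation(model, epistemic_sampler, aleatory_sampler,
                       n_epi=50, n_ale=1000):
    """Outer loop over epistemic parameters, inner loop over aleatory
    variables; returns one output sample set per epistemic draw."""
    ensembles = []
    for _ in range(n_epi):
        e = epistemic_sampler()
        ensembles.append([model(e, aleatory_sampler()) for _ in range(n_ale)])
    return np.array(ensembles)              # shape (n_epi, n_ale)

rng = np.random.default_rng(6)
out = nested_propagation(lambda e, a: e * a,            # hypothetical model
                         lambda: rng.uniform(0.5, 1.5), # epistemic draw
                         lambda: rng.normal())          # aleatory draw
```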

Toward using surrogates to accelerate solution of stochastic electricity grid operations problems

2014 North American Power Symposium, NAPS 2014

Safta, Cosmin S.; Chen, Richard L.; Najm, H.N.; Pinar, Ali P.; Watson, Jean-Paul W.

Stochastic unit commitment models typically handle uncertainties in forecast demand by considering a finite number of realizations from a stochastic process model for loads. Accurate evaluations of expectations or higher moments for the quantities of interest require a prohibitively large number of model evaluations. In this paper we propose an alternative approach based on using surrogate models valid over the range of the forecast uncertainty. We consider surrogate models based on Polynomial Chaos expansions, constructed using sparse quadrature methods. Considering expected generation cost, we demonstrate that the approach can lead to several orders of magnitude reduction in computational cost relative to using Monte Carlo sampling on the original model, for a given target error threshold.
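
A one-dimensional sketch of the surrogate idea: project a cost model onto a Hermite polynomial chaos basis by quadrature, after which the expected cost is read off as the zeroth coefficient. The cost function here is hypothetical, and the paper's surrogates are multi-dimensional and built with sparse quadrature.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval, hermegauss

def pc_surrogate_coeffs(cost, order, n_quad=16):
    """Project a cost model onto probabilists' Hermite polynomials by
    Gauss quadrature; coefficient 0 is the expected cost under N(0,1)."""
    x, w = hermegauss(n_quad)            # nodes/weights for weight e^{-x^2/2}
    w = w / np.sqrt(2.0 * np.pi)         # normalize to the N(0,1) density
    coefs = np.empty(order + 1)
    for k in range(order + 1):
        Hk = hermeval(x, np.eye(order + 1)[k])             # He_k at the nodes
        coefs[k] = np.sum(w * cost(x) * Hk) / math.factorial(k)  # E[He_k^2]=k!
    return coefs

# Hypothetical quadratic cost in a Gaussian demand-forecast error.
coefs = pc_surrogate_coeffs(lambda x: 100 + 5 * x + 2 * x**2, order=4)
expected_cost = coefs[0]                 # 100 + 2 * E[x^2] = 102
```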

Greenhouse Gas Source Attribution: Measurements Modeling and Uncertainty Quantification

Liu, Zhen L.; Safta, Cosmin S.; Sargsyan, Khachik S.; Najm, H.N.; van Bloemen Waanders, Bart G.; LaFranchi, Brian L.; Ivey, Mark D.; Schrader, Paul E.; Michelsen, Hope A.; Bambha, Ray B.

In this project we have developed atmospheric measurement capabilities and a suite of atmospheric modeling and analysis tools that are well suited for verifying emissions of greenhouse gases (GHGs) on an urban-through-regional scale. We have for the first time applied the Community Multiscale Air Quality (CMAQ) model to simulate atmospheric CO2. This will allow for the examination of regional-scale transport and distribution of CO2, along with air pollutants traditionally studied using CMAQ, at relatively high spatial and temporal resolution, with the goal of leveraging emissions verification efforts for both air quality and climate. We have developed a bias-enhanced Bayesian inference approach that can remedy the well-known problem of transport model errors in atmospheric CO2 inversions. We have tested the approach using data and model outputs from the TransCom3 global CO2 inversion comparison project. We have also performed two prototyping studies on inversion approaches in the generalized convection-diffusion context. One of these studies employed Polynomial Chaos Expansion to accelerate the evaluation of a regional transport model and enable efficient Markov Chain Monte Carlo sampling of the posterior for Bayesian inference. The other approach uses deterministic inversion of a convection-diffusion-reaction system in the presence of uncertainty. These approaches should, in principle, be applicable to realistic atmospheric problems with moderate adaptation. We outline a regional greenhouse gas source inference system that integrates (1) two approaches to atmospheric dispersion simulation and (2) a class of Bayesian inference and uncertainty quantification algorithms. We use two different and complementary approaches to simulate atmospheric dispersion. Specifically, we use the Eulerian chemical transport model CMAQ and a Lagrangian particle dispersion model, FLEXPART-WRF. These two models share the same WRF-assimilated meteorology fields, making it possible to perform a hybrid simulation, in which the Eulerian model (CMAQ) can be used to compute the initial condition needed by the Lagrangian model, while the source-receptor relationships for a large state vector can be efficiently computed using the Lagrangian model in its backward mode. In addition, CMAQ has a complete treatment of atmospheric chemistry of a suite of traditional air pollutants, many of which could help attribute GHGs from different sources. The inference of emissions sources using atmospheric observations is cast as a Bayesian model calibration problem, which is solved using a variety of Bayesian techniques, such as the bias-enhanced Bayesian inference algorithm, which accounts for the intrinsic model deficiency; Polynomial Chaos Expansion, to accelerate model evaluation and Markov Chain Monte Carlo sampling; and Karhunen-Loève (KL) expansion, to reduce the dimensionality of the state space. We have established an atmospheric measurement site in Livermore, CA and are collecting continuous measurements of CO2, CH4, and other species that are typically co-emitted with these GHGs. Measurements of co-emitted species can assist in attributing the GHGs to different emissions sectors. Automatic calibrations using traceable standards are performed routinely for the gas-phase measurements. We are also collecting standard meteorological data at the Livermore site, as well as planetary boundary layer height measurements using a ceilometer. The location of the measurement site is well suited to sample air transported between the San Francisco Bay area and the California Central Valley.

A second-order coupled immersed boundary-SAMR construction for chemically reacting flow over a heat-conducting Cartesian grid-conforming solid

Journal of Computational Physics

Kedia, Kushal S.; Safta, Cosmin S.; Ray, Jaideep R.; Najm, H.N.; Ghoniem, Ahmed F.

In this paper, we present a second-order numerical method for simulations of reacting flow around heat-conducting immersed solid objects. The method is coupled with a block-structured adaptive mesh refinement (SAMR) framework and a low-Mach number operator-split projection algorithm. A "buffer zone" methodology is introduced to impose the solid-fluid boundary conditions such that the solver uses symmetric derivatives and interpolation stencils throughout the interior of the numerical domain, irrespective of whether it describes fluid or solid cells. Solid cells are tracked using a binary marker function. The no-slip velocity boundary condition at the immersed wall is imposed using the staggered mesh. Near the immersed solid boundary, single-sided buffer zones (inside the solid) are created to resolve the species discontinuities, and dual buffer zones (inside and outside the solid) are created to capture the temperature gradient discontinuities. The development discussed in this paper is limited to a two-dimensional Cartesian grid-conforming solid. We validate the code using benchmark simulations documented in the literature. We also demonstrate the overall second-order convergence of our numerical method. To demonstrate its capability, a reacting flow simulation of a methane/air premixed flame stabilized on a channel-confined bluff-body using a detailed chemical kinetics model is discussed.

Numerical Continuation Methods for Intrusive Uncertainty Quantification Studies

Safta, Cosmin S.; Najm, H.N.; Phipps, Eric T.

Rigorous modeling of engineering systems relies on efficient propagation of uncertainty from input parameters to model outputs. In recent years, there has been substantial development of probabilistic polynomial chaos (PC) Uncertainty Quantification (UQ) methods, enabling studies in expensive computational models. One approach, termed "intrusive", involving reformulation of the governing equations, has been found to have superior computational performance compared to non-intrusive sampling-based methods in relevant large-scale problems, particularly in the context of emerging architectures. However, the utility of intrusive methods has been severely limited due to detrimental numerical instabilities associated with strong nonlinear physics. Previous methods for stabilizing these constructions tend to add unacceptably high computational costs, particularly in problems with many uncertain parameters. In order to address these challenges, we propose to adapt and improve numerical continuation methods for the robust time integration of intrusive PC system dynamics. We propose adaptive methods, starting with a small uncertainty for which the model has stable behavior and gradually moving to larger uncertainty where the instabilities are rampant, in a manner that provides a suitable solution.
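
A minimal sketch of the continuation idea in the last sentence: solve at a small uncertainty scale first, then ramp the scale toward its target while warm-starting each solve from the previous solution. The scalar residual below is a hypothetical stand-in for the intrusive Galerkin system.

```python
import numpy as np
from scipy.optimize import fsolve

def continuation_ramp(residual, u0, scales):
    """Solve residual(u, s) = 0 for increasing uncertainty scale s,
    warm-starting each solve from the previous solution."""
    u = u0
    for s in scales:
        u = fsolve(lambda x: residual(x, s), u)
    return u

# Toy cubic residual that is easier to track by ramping s than to solve
# cold at the target scale s = 1.
res = lambda u, s: u**3 - 3 * u - s * 5.0
u = continuation_ramp(res, np.array([2.0]), np.linspace(0.1, 1.0, 10))
```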

Data free inference with processed data products

Statistics and Computing

Najm, H.N.; Chowdhary, Kamaljit S.

Here, we consider the context of probabilistic inference of model parameters given error bars or confidence intervals on model output values, when the data is unavailable. We introduce a class of algorithms in a Bayesian framework, relying on maximum entropy arguments and approximate Bayesian computation methods, to generate consistent data with the given summary statistics. Once we obtain consistent data sets, we pool the respective posteriors, to arrive at a single, averaged density on the parameters. This approach allows us to perform accurate forward uncertainty propagation consistent with the reported statistics.

Enforcing positivity in intrusive PC-UQ methods for reactive ODE systems

Journal of Computational Physics

Najm, H.N.; Valorani, Mauro

We explore the relation between the development of a non-negligible probability of negative states and the instability of numerical integration of the intrusive Galerkin ordinary differential equation system describing uncertain chemical ignition. To prevent this instability without resorting to either multi-element local polynomial chaos (PC) methods or increasing the order of the PC representation in time, we propose a procedure aimed at modifying the amplitude of the PC modes to bring the probability of negative state values below a user-defined threshold. This modification can be effectively described as a filtering procedure of the spectral PC coefficients, which is applied on-the-fly during the numerical integration when the current value of the probability of negative states exceeds the prescribed threshold. We demonstrate the filtering procedure using a simple model of an ignition process in a batch reactor. This is carried out by comparing different observables and error measures as obtained by non-intrusive Monte Carlo and Gauss-quadrature integration and the filtered intrusive procedure. The filtering procedure has been shown to effectively stabilize divergent intrusive solutions, and also to improve the accuracy of stable intrusive solutions which are close to the stability limits.
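
A toy version of the filtering idea: monitor the sampled probability of negative states and damp the non-mean PC modes until it falls below the threshold. The damping rule and the two-term expansion are illustrative assumptions, not the paper's exact filter.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def filter_pc_modes(coefs, threshold=1e-3, damp=0.9, n_mc=20000, rng=None):
    """Shrink higher-order PC modes until the sampled probability of
    negative state values drops below the threshold."""
    rng = rng or np.random.default_rng(0)
    xi = rng.normal(size=n_mc)
    c = coefs.copy()
    while np.mean(hermeval(xi, c) < 0.0) > threshold:
        c[1:] *= damp        # keep the mean, shrink fluctuation modes
    return c

# Hypothetical species mass-fraction expansion with a slightly negative
# tail: y(xi) = 0.1 + 0.08 * He_1(xi).
c = filter_pc_modes(np.array([0.1, 0.08]))
```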
