Intrusive UQ Algorithms for Emerging Computing Platforms
Abstract not provided.
International Journal for Numerical Methods in Fluids
In this paper, we present a Bayesian framework for estimating joint densities for large eddy simulation (LES) sub-grid scale model parameters based on canonical forced isotropic turbulence direct numerical simulation (DNS) data. The framework accounts for noise in the independent variables, and we present alternative formulations for representing discrepancies between model and data. To generate probability densities for flow characteristics, posterior densities for sub-grid scale model parameters are propagated forward through LES of channel flow and compared with DNS data. Synthesis of the calibration and prediction results demonstrates that the model parameters have an explicit filter-width dependence and are highly correlated. Discrepancies between DNS and calibrated LES results point to additional model-form inadequacies that need to be accounted for.
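As an illustration of this calibrate-then-propagate workflow, the sketch below calibrates a single hypothetical sub-grid scale coefficient against synthetic "DNS" data with a random-walk Metropolis sampler and then pushes the posterior samples through a toy forward model. The model form, filter widths, noise levels, and parameter values are assumptions made only for the example, not the paper's actual setup.

```python
# Hypothetical sketch: Bayesian calibration of a scalar SGS-model parameter
# against synthetic "DNS" data, followed by forward propagation of the
# posterior through a toy surrogate standing in for the LES prediction.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "DNS" observations of a filtered quantity y at several filter widths d
d_obs = np.linspace(0.05, 0.4, 8)                 # hypothetical filter widths
c_true = 0.17                                     # hypothetical "true" coefficient
y_obs = c_true * d_obs**2 + rng.normal(0.0, 2e-3, d_obs.size)

def forward(c, d):
    """Toy surrogate standing in for the LES/SGS model response."""
    return c * d**2

def log_post(c, noise_sd=2e-3):
    if c <= 0.0:                                  # flat prior on c > 0
        return -np.inf
    r = y_obs - forward(c, d_obs)
    return -0.5 * np.sum((r / noise_sd) ** 2)

# Random-walk Metropolis sampling of the posterior on c
chain, c = [], 0.1
lp = log_post(c)
for _ in range(20000):
    c_prop = c + 0.01 * rng.normal()
    lp_prop = log_post(c_prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        c, lp = c_prop, lp_prop
    chain.append(c)
post = np.array(chain[5000:])                     # discard burn-in

# Forward propagation: push posterior samples through the model at a new
# filter width to obtain a predictive density for the flow quantity.
d_new = 0.3
y_pred = forward(post, d_new)
print(f"posterior mean c = {post.mean():.3f}, "
      f"predictive 95% band at d={d_new}: "
      f"[{np.percentile(y_pred, 2.5):.4f}, {np.percentile(y_pred, 97.5):.4f}]")
```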
Combustion and Flame
A general strategy for analysis and reduction of uncertain chemical kinetic models is presented, and its utility is illustrated in the context of ignition of hydrocarbon fuel–air mixtures. The strategy is based on a deterministic analysis and reduction method which employs computational singular perturbation analysis to generate simplified kinetic mechanisms, starting from a detailed reference mechanism. We model uncertain quantities in the reference mechanism, namely the Arrhenius rate parameters, as random variables with prescribed uncertainty factors. We propagate this uncertainty to obtain the probability of inclusion of each reaction in the simplified mechanism. We propose probabilistic error measures to compare predictions from the uncertain reference and simplified models, based on the comparison of the uncertain dynamics of the state variables, where the mixture entropy is chosen as progress variable. We employ the construction for the simplification of an uncertain mechanism in an n-butane–air mixture homogeneous ignition case, where a 176-species, 1111-reaction detailed kinetic model for the oxidation of n-butane is used with uncertainty factors assigned to each Arrhenius rate pre-exponential coefficient. This illustration is employed to highlight the utility of the construction and the performance of the family of simplified models produced, depending on the chosen thresholds on the importance and marginal probabilities of the reactions.
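The core propagation step, estimating the probability that each reaction survives the reduction, can be sketched as follows. The importance function, uncertainty factors, thresholds, and mechanism size below are hypothetical placeholders (the paper uses CSP-based importance indices on a 176-species mechanism); only the Monte Carlo structure is illustrated.

```python
# Hypothetical sketch: Monte Carlo estimate of the probability that each
# reaction is retained by a deterministic importance-based reduction, when
# Arrhenius pre-exponential factors carry prescribed uncertainty factors.
# The "importance" function here is a stand-in for a CSP importance index.
import numpy as np

rng = np.random.default_rng(1)

n_rxn = 12                                  # hypothetical small mechanism
A_nom = 10.0 ** rng.uniform(8, 14, n_rxn)   # nominal pre-exponential factors
UF = rng.uniform(1.5, 5.0, n_rxn)           # uncertainty factors per reaction
threshold = 0.05                            # importance threshold for retention

def importance(A):
    """Stand-in for a CSP-based importance index (normalized to sum to 1)."""
    return A / A.sum()

n_samples = 5000
retained = np.zeros(n_rxn)
for _ in range(n_samples):
    # Sample each rate within its uncertainty factor: A in [A_nom/UF, A_nom*UF],
    # log-uniform, a common convention for kinetic uncertainty factors.
    A = A_nom * UF ** rng.uniform(-1.0, 1.0, n_rxn)
    retained += importance(A) > threshold

p_inclusion = retained / n_samples
for i, p in enumerate(p_inclusion):
    print(f"reaction {i:2d}: inclusion probability {p:.3f}")
```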
10th U.S. National Combustion Meeting
The thermal decomposition of H2O2 is an important process in hydrocarbon combustion, playing a particularly crucial role in providing a source of radicals at high pressure, where it controls the third explosion limit in the H2-O2 system, and also as a branching reaction in intermediate-temperature hydrocarbon oxidation. As such, understanding the uncertainty in the rate expression for this reaction is crucial for predictive combustion computations. Raw experimental measurement data, with its associated noise and uncertainty, is typically unreported in investigations of elementary reaction rates, making the direct derivation of the joint uncertainty structure of the parameters in rate expressions difficult. To overcome this, we employ a statistical inference procedure, relying on maximum entropy and approximate Bayesian computation methods and using a two-level nested Markov chain Monte Carlo algorithm, to arrive at a posterior density on rate parameters for a selected case of laser absorption measurements in a shock tube study, subject to the constraints imposed by the reported experimental statistics. The procedure constructs a set of H2O2 concentration decay profiles consistent with these reported statistics. These consistent data sets are then used to determine the joint posterior density on the rate parameters through straightforward Bayesian inference. More broadly, the method provides a framework for the replication and comparison of missing data from different experiments, based on reported statistics, for the generation of consensus rate expressions.
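A heavily simplified, purely illustrative version of the data-reconstruction idea is sketched below: rejection-style approximate Bayesian computation assembles decay profiles whose summary statistics match a "reported" rate constant and noise level. The rate, noise levels, time grid, and acceptance tolerances are all invented for illustration; the paper's actual procedure imposes maximum entropy constraints and uses a two-level nested MCMC sampler rather than simple rejection.

```python
# Hypothetical sketch: rejection-ABC construction of H2O2 concentration-decay
# data sets consistent with reported statistics (a fitted rate constant and a
# measurement-noise level). All numbers are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(2)

t = np.linspace(0.0, 2.5e-4, 30)           # hypothetical observation times [s]
c0 = 1.0e-3                                 # hypothetical initial [H2O2]
k_rep = 4.0e3                               # "reported" first-order rate [1/s]
sigma_rep = 2.0e-5                          # "reported" measurement-noise level

def simulate(k, noise_sd):
    """First-order decay profile with additive Gaussian measurement noise."""
    return c0 * np.exp(-k * t) + rng.normal(0.0, noise_sd, t.size)

def summarize(c):
    """Recover the statistics an experimentalist would report: a log-linear
    fit for the rate and the residual scatter about the fitted profile."""
    c_pos = np.clip(c, 1e-12, None)
    slope, _ = np.polyfit(t, np.log(c_pos / c0), 1)
    k_hat = -slope
    resid = c - c0 * np.exp(-k_hat * t)
    return k_hat, resid.std()

consistent = []                             # accepted, "consistent" data sets
while len(consistent) < 200:
    k = rng.uniform(3.0e3, 5.0e3)           # broad proposal on the rate
    noise = 10.0 ** rng.uniform(-5.2, -4.4)  # broad proposal on noise level
    data = simulate(k, noise)
    k_hat, s_hat = summarize(data)
    # Accept the data set if its summaries match the reported statistics.
    if abs(k_hat - k_rep) < 0.1 * k_rep and abs(s_hat - sigma_rep) < 0.3 * sigma_rep:
        consistent.append(data)

# Each accepted profile can now be fed to a standard Bayesian fit of the rate
# parameters; pooling those posteriors yields the consensus density.
print(f"accepted {len(consistent)} consistent decay profiles")
```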
Proceedings of the Combustion Institute
Bayesian inference and maximum entropy methods were employed to estimate the joint probability density of the Arrhenius rate parameters for the rate coefficient of the H2/O2-mechanism chain-branching reaction H + O2 → OH + O. A consensus joint posterior on the parameters was obtained by pooling the posterior parameter densities given each consistent data set. Efficient surrogates for the OH concentration were constructed using a combination of Padé and polynomial approximants. Gauss-Hermite quadrature with Gaussian proposal probability density functions was used for moment computation, resulting in an orders-of-magnitude speedup in data-likelihood evaluation. The consistent data sets resulted in nearly Gaussian conditional parameter probability density functions. The resulting pooled parameter probability density function was propagated through stoichiometric H2-air auto-ignition computations to illustrate the necessity of accounting for correlation among the Arrhenius rate parameters of a single reaction, and across the rate parameters of different reactions.
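To show the flavor of the quadrature step, the sketch below computes moments of a stand-in surrogate output under a Gaussian density on a scaled Arrhenius parameter using a Gauss-Hermite rule, with a Monte Carlo cross-check. The surrogate form and density parameters are hypothetical; the paper's surrogates are Padé/polynomial approximants of the OH concentration.

```python
# Hypothetical sketch: Gauss-Hermite computation of moments of a surrogate
# model output under a Gaussian density on a (scaled) Arrhenius parameter.
import numpy as np

# Gaussian density on the scaled parameter, e.g. a shifted ln A
mu, sigma = 0.0, 0.3

def surrogate(theta):
    """Stand-in for a Pade/polynomial surrogate of, e.g., peak [OH]."""
    return 1.0 / (1.0 + 0.5 * theta + 0.1 * theta**2)

# Gauss-Hermite rule: integral of exp(-x^2) f(x) dx ~= sum w_i f(x_i)
x, w = np.polynomial.hermite.hermgauss(20)

# Change of variables theta = mu + sqrt(2)*sigma*x gives moments under N(mu, sigma^2)
vals = surrogate(mu + np.sqrt(2.0) * sigma * x)
m1 = np.sum(w * vals) / np.sqrt(np.pi)
m2 = np.sum(w * vals**2) / np.sqrt(np.pi)
print(f"GH  mean = {m1:.6f}, variance = {m2 - m1**2:.6e}")

# Cross-check against brute-force Monte Carlo
rng = np.random.default_rng(3)
samples = surrogate(rng.normal(mu, sigma, 200000))
print(f"MC  mean = {samples.mean():.6f}, variance = {samples.var():.6e}")
```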
2017 Fall Technical Meeting of the Western States Section of the Combustion Institute, WSSCI 2017
The reaction of OH with H2 is a crucial chain-propagating step in the H2-O2 system, making the specification of its rate, and its uncertainty, important for predicting the high-temperature combustion of hydrocarbons. To obtain an uncertain representation of this reaction rate in the absence of actual experimental data, we perform an inference procedure employing maximum entropy and approximate Bayesian computation methods to recover hypothetical data from a target shock-tube experiment designed to measure the reverse reaction rate. This method attempts to invert the fitting procedure from noisy measurement data to parameters, with associated uncertainty specifications, to arrive at candidate noisy data sets consistent with these reported parameters and their uncertainties. The uncertainty structure of the Arrhenius parameters is obtained by fitting each hypothetical data set in a Bayesian framework and pooling the resulting joint parameter posterior densities to arrive at a consensus density. We highlight the advantages of working with a data-centric representation of the experimental uncertainty with regard to model choice and consistency, and the ability to combine experimental evidence from multiple sources. Finally, we demonstrate the utility of knowledge of the joint Arrhenius parameter density for performing predictive modeling of combustion systems of interest.
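The fit-then-pool step can be illustrated with a minimal sketch: each hypothetical data set of ln k versus temperature gets a Gaussian posterior on (ln A, Ea) from a linear least-squares fit, and the per-data-set posterior samples are mixed with equal weights into a consensus density. The temperature range, noise level, and "true" parameter values below are assumptions made only for the example.

```python
# Hypothetical sketch: fit Arrhenius parameters (ln A, Ea) to each hypothetical
# rate data set via linear least squares on ln k vs 1/T, then pool the
# per-data-set posterior samples into a consensus joint density.
import numpy as np

rng = np.random.default_rng(4)
R = 8.314                                          # gas constant [J/mol/K]
T = np.linspace(900.0, 2500.0, 12)                 # hypothetical temperatures [K]
lnA_true, Ea_true = 32.0, 70.0e3                   # hypothetical "true" values

def make_dataset(noise_sd=0.05):
    """One hypothetical noisy measurement set of ln k over the T range."""
    return lnA_true - Ea_true / (R * T) + rng.normal(0.0, noise_sd, T.size)

def posterior_samples(lnk, noise_sd=0.05, n=2000):
    """Gaussian posterior on (ln A, Ea) for the linear model
    ln k = ln A - Ea/(R T), with a flat prior and known noise level."""
    X = np.column_stack([np.ones_like(T), -1.0 / (R * T)])
    cov = noise_sd**2 * np.linalg.inv(X.T @ X)
    mean = np.linalg.solve(X.T @ X, X.T @ lnk)
    return rng.multivariate_normal(mean, cov, size=n)

# Fit each hypothetical data set, then pool with equal weights by mixing samples
pooled = np.vstack([posterior_samples(make_dataset()) for _ in range(10)])
lnA, Ea = pooled[:, 0], pooled[:, 1]
corr = np.corrcoef(lnA, Ea)[0, 1]
print(f"pooled mean ln A = {lnA.mean():.2f}, mean Ea = {Ea.mean():.0f} J/mol, "
      f"corr(ln A, Ea) = {corr:.3f}")
```

The strong (ln A, Ea) correlation in the pooled samples illustrates why the joint parameter density, rather than independent marginals, is needed for predictive modeling.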
Journal of Computational Physics
One of the most widely used procedures for dimensionality reduction of high-dimensional data is principal component analysis (PCA). More broadly, a low-dimensional stochastic representation of random fields with finite variance is provided by the well-known Karhunen-Loève expansion (KLE). The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation of the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., minimizes the mean square error. In practice, this orthogonal transformation is determined by performing a singular value decomposition (SVD) of the sample covariance matrix or of the data matrix itself. Sampling error is typically ignored when quantifying the principal components or, equivalently, the basis functions of the KLE, and this error is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure that yields a probabilistic model of the principal components and can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, in which the posterior is the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen-Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite-dimensional stochastic process inspired by Brownian motion.
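The classical sample-based KLE/PCA step that the Bayesian construction generalizes can be sketched as follows; the ensemble size, grid, and truncation order are arbitrary choices for illustration, and the matrix-Bingham Gibbs sampler itself is not reproduced here.

```python
# Hypothetical sketch: the classical sample-based KLE/PCA step -- estimate the
# basis by SVD of a small, centered ensemble of Brownian-motion-like paths and
# form a truncated expansion. (The Bayesian version replaces this point
# estimate of the basis with a matrix Bingham posterior sampled by Gibbs.)
import numpy as np

rng = np.random.default_rng(5)
n_grid, n_samples = 200, 20                 # field dimension >> sample size
dt = 1.0 / n_grid

# Small ensemble of Brownian paths: rows are realizations of the random field
paths = np.cumsum(np.sqrt(dt) * rng.normal(size=(n_samples, n_grid)), axis=1)

# Center and take the SVD of the data matrix; right singular vectors are the
# sample KLE modes, and the singular values give the mode energies.
mean = paths.mean(axis=0)
U, s, Vt = np.linalg.svd(paths - mean, full_matrices=False)
energy = s**2 / np.sum(s**2)

k = 5                                        # truncation order
coeffs = (paths - mean) @ Vt[:k].T           # KL coordinates of each sample
recon = mean + coeffs @ Vt[:k]               # rank-k reconstruction

err = np.linalg.norm(paths - recon) / np.linalg.norm(paths)
print(f"first {k} modes capture {energy[:k].sum():.1%} of sample variance, "
      f"relative reconstruction error {err:.3f}")
```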