CSPlib is an open-source software library for analyzing general ordinary differential equation (ODE) systems and detailed chemical kinetic ODE systems. It relies on the computational singular perturbation (CSP) method for the analysis of these systems. The software provides support for:
- General ODE models (gODE model class) for computing source terms and Jacobians for a generic ODE system.
- A TChem model (ChemElemODETChem model class) for computing the source term, Jacobian, other necessary chemical reaction data, and the rates of progress for a homogeneous batch reactor using an elementary-step detailed chemical kinetic reaction mechanism. This class relies on the TChem [2] library.
- A set of functions to compute essential elements of CSP analysis (Kernel class), including the eigensolution of the Jacobian matrix, CSP basis vectors and co-vectors, time scales (reciprocals of the magnitudes of the Jacobian eigenvalues), mode amplitudes, CSP pointers, and the number of exhausted modes. This class relies on the Tines library.
- A set of functions to compute the eigensolution of the Jacobian matrix using the Tines library's GPU eigensolver.
- A set of functions to compute CSP indices (Index class), including participation indices and both slow and fast importance indices.
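As an illustration of the kernel-level quantities listed above (and not of CSPlib's actual API), the sketch below computes CSP time scales, basis vectors, co-vectors, and mode amplitudes from a hypothetical 3-variable Jacobian and source term:

```python
import numpy as np

# Hypothetical Jacobian J and source term g for a 3-variable ODE
# dy/dt = g(y); in CSPlib these would come from a model class
# (gODE or ChemElemODETChem), here they are placeholders.
J = np.array([[-100.0,   1.0,  0.0],
              [   2.0, -10.0,  1.0],
              [   0.0,   1.0, -0.1]])
g = np.array([0.5, -0.2, 0.1])

# Eigendecomposition of the Jacobian: the right eigenvectors (columns
# of A) act as CSP basis vectors, the rows of B = A^{-1} as co-vectors.
lam, A = np.linalg.eig(J)
B = np.linalg.inv(A)

order = np.argsort(-np.abs(lam))    # fastest mode first
tau = 1.0 / np.abs(lam[order])      # time scales: reciprocal |eigenvalue|
f = B[order] @ g                    # mode amplitudes

print("time scales:    ", tau)
print("mode amplitudes:", f)
```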
Fundamental results and an efficient algorithm are developed for constructing eigenvectors corresponding to non-zero eigenvalues of matrices with zero rows and/or columns. The formulation is based on the relation between the eigenvectors of such matrices and the eigenvectors of their submatrices obtained by removing all zero rows and columns. While easily implemented, the algorithm reduces the computation time needed for numerical eigenanalysis and resolves potential numerical eigensolver instabilities.
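A minimal sketch of the underlying idea, assuming the eigenvalues of interest in the retained submatrix are nonzero; the embedding rules follow from the relation stated above, but the implementation details are illustrative rather than the authors' code:

```python
import numpy as np

def eig_with_zero_rows_cols(M, tol=1e-14):
    """Eigenpairs of M for nonzero eigenvalues, computed from the
    submatrix left after deleting all zero rows and columns, then
    embedded back into the full space."""
    n = M.shape[0]
    zero_rows = np.where(np.all(np.abs(M) < tol, axis=1))[0]
    zero_cols = np.where(np.all(np.abs(M) < tol, axis=0))[0]
    keep = np.setdiff1d(np.arange(n), np.union1d(zero_rows, zero_cols))

    S = M[np.ix_(keep, keep)]
    lam, W = np.linalg.eig(S)       # assumed nonzero eigenvalues

    V = np.zeros((n, len(lam)), dtype=complex)
    V[keep, :] = W
    # A zero row i forces v_i = 0 for nonzero eigenvalues; a zero
    # column i with a nonzero row gives v_i = (M[i, keep] @ w) / lambda.
    for i in np.setdiff1d(zero_cols, zero_rows):
        V[i, :] = (M[i, keep] @ W) / lam
    return lam, V

# Example: 4x4 matrix with a zero row (row 1) and zero columns (1, 3).
M = np.array([[2.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 0.0],
              [1.0, 0.0, 3.0, 0.0],
              [4.0, 0.0, 5.0, 0.0]])
lam, V = eig_with_zero_rows_cols(M)
for k in range(len(lam)):
    print(np.allclose(M @ V[:, k], lam[k] * V[:, k]))  # True for each pair
```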
A stable explicit time-scale splitting algorithm for stiff chemical Langevin equations (CLEs) is developed, based on the concept of computational singular perturbation. The drift term of the CLE is projected onto basis vectors that span the fast and slow subdomains. The corresponding fast modes exhaust quickly, in the mean sense, and the system state then evolves, with a mean drift controlled by slow modes, on a random manifold. The drift-driven time evolution of the state due to fast exhausted modes is modeled algebraically as an exponential decay process, while that due to slow drift modes and diffusional processes is integrated explicitly. This allows time integration step sizes much larger than those required by typical explicit numerical methods for stiff stochastic differential equations. The algorithm is motivated and discussed, and extensive numerical experiments are conducted to illustrate its accuracy and stability with a number of model systems.
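The splitting idea can be sketched on a linear toy SDE (this is not the paper's algorithm, which adapts the basis along the trajectory and treats the random manifold with more care): the fast mode is handled as an algebraic exponential decay, while the slow mode and the diffusion are integrated explicitly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear toy SDE dX = J X dt + sigma dW with one fast and one slow
# drift mode (a stand-in for the projected CLE drift). Eigenvalues
# of J are -1001 (fast) and -1 (slow).
J = np.array([[-1000.0, 999.0],
              [    1.0,  -2.0]])
sigma = 0.05
lam, A = np.linalg.eig(J)      # right eigenvectors = basis vectors
B = np.linalg.inv(A)           # rows = co-vectors

fast = np.abs(lam) > 100.0     # crude fast/slow classification
dt = 1e-2                      # far above the fast time scale 1/1001
X = np.array([1.0, 1.0])

for _ in range(200):
    z = B @ X                  # mode amplitudes
    # Fast exhausted modes: model the drift-driven decay algebraically
    # as exp(lambda*dt) instead of integrating them explicitly.
    z[fast] *= np.exp(lam[fast] * dt)
    # Slow drift modes: explicit (Euler) update.
    z[~fast] *= 1.0 + lam[~fast] * dt
    X = np.real(A @ z)
    # Diffusion handled explicitly (Euler-Maruyama increment).
    X += sigma * np.sqrt(dt) * rng.standard_normal(2)

print("state after integration:", X)
```

With the fast eigenvalue near -1000, a plain explicit Euler step at dt = 1e-2 would be unstable; the modal exponential treatment removes that restriction, which is the point of the splitting.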
Basis adaptation in Homogeneous Chaos spaces relies on a suitable rotation of the underlying Gaussian germ. Several rotations have been proposed in the literature, resulting in adaptations with different convergence properties. In this paper, we present a new adaptation mechanism that builds on compressive sensing algorithms, resulting in a reduced polynomial chaos approximation with optimal sparsity. The developed adaptation algorithm consists of a two-step optimization procedure that computes the optimal coefficients and the input projection matrix of a low-dimensional chaos expansion with respect to an optimally rotated basis. We demonstrate the attractive features of our algorithm through several numerical examples, including an application to Large-Eddy Simulation (LES) calculations of turbulent combustion in a HIFiRE scramjet engine.
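A simplified sketch of the two ingredients, with the paper's joint two-step optimization replaced by the classical linear Gaussian adaptation for the rotation and an off-the-shelf l1 (Lasso) fit for sparsity; the model, dimensions, and expansion order are assumed for illustration:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)

# Toy model: a scalar QoI driven mainly by one direction of a
# 10-dimensional Gaussian germ (stand-in for an expensive simulation).
d, n = 10, 200
xi = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:2] = [3.0, 1.0]
w_true /= np.linalg.norm(w_true)
y = np.tanh(xi @ w_true) + 0.3 * (xi @ w_true) ** 2

# Rotation step: estimate the dominant germ direction from the
# first-order chaos coefficients (linear Gaussian adaptation).
g1 = xi.T @ y / n          # E[y * xi_i] = first-order PC coefficients
w = g1 / np.linalg.norm(g1)
eta = xi @ w               # adapted one-dimensional germ

# Sparsity step: l1-regularized fit of a 1-d Hermite expansion in the
# rotated variable; the Lasso penalty promotes a sparse coefficient set.
P = 8                      # expansion order (assumed)
Psi = hermevander(eta, P)  # probabilists' Hermite design matrix
fit = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000).fit(Psi, y)
print("nonzero coefficients:", np.flatnonzero(np.abs(fit.coef_) > 1e-8))
```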
Model error estimation remains one of the key challenges in uncertainty quantification and predictive science. For computational models of complex physical systems, model error, also known as structural error or model inadequacy, is often the largest contributor to the overall predictive uncertainty. This work builds on a recently developed framework of embedded, internal model correction in order to represent and quantify structural errors, together with model parameters, within a Bayesian inference context. We focus specifically on a Polynomial Chaos representation with additive modification of existing model parameters, enabling a non-intrusive procedure for efficient approximate likelihood construction, model error estimation, and disambiguation of the contributions of model and data errors to predictive uncertainty. The framework is demonstrated on several synthetic examples, as well as on a chemical ignition problem.
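A toy sketch of the embedded approach, with an assumed misspecified model (not the paper's ignition problem): the parameter is augmented with a first-order polynomial chaos term, a Gaussian pushforward approximation supplies the likelihood, and random-walk Metropolis samples the posterior.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setup (assumed): data come from y = x^1.1 but the model class is
# y = lam * x. The structural error is embedded in the parameter,
# lam = lam0 + a*xi with xi ~ N(0,1), a first-order PC germ on lam.
x = np.linspace(0.5, 2.0, 8)
data = x ** 1.1 + 0.01 * rng.standard_normal(x.size)
xi_q = rng.standard_normal(64)   # fixed germ samples (common random numbers)

def approx_loglik(lam0, a):
    # Non-intrusive approximate likelihood: push the embedded parameter
    # through the model, then use a Gaussian fit (mean/std per data
    # point) of the pushforward as the likelihood, in miniature.
    preds = np.outer(lam0 + a * xi_q, x)          # shape (64, ndata)
    mu, sd = preds.mean(0), preds.std(0) + 1e-6
    return np.sum(-0.5 * ((data - mu) / sd) ** 2 - np.log(sd))

# Random-walk Metropolis over (lam0, log a), flat priors (a sketch).
theta = np.array([1.0, np.log(0.1)])
ll = approx_loglik(theta[0], np.exp(theta[1]))
chain = []
for _ in range(5000):
    prop = theta + 0.05 * rng.standard_normal(2)
    ll_p = approx_loglik(prop[0], np.exp(prop[1]))
    if np.log(rng.uniform()) < ll_p - ll:
        theta, ll = prop, ll_p
    chain.append(theta.copy())
chain = np.array(chain)
print("posterior mean lam0:", chain[:, 0].mean())
print("posterior mean a:   ", np.exp(chain[:, 1]).mean())
```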
The computational burden of a large-eddy simulation for reactive flows is exacerbated in the presence of uncertainty in flow conditions or kinetic variables. A comprehensive statistical analysis, with a sufficiently large number of samples, remains elusive. Statistical learning is an approach that allows for extracting more information using fewer samples. Such procedures, if successful, can greatly enhance the predictability of models by improving the exploration and characterization of uncertainty due to model error and input dependencies, while remaining constrained by the size of the associated statistical samples. In this paper, it is shown how a recently developed procedure for probabilistic learning on manifolds can serve to improve, within a probabilistic framework, the predictability of a scramjet simulation. The estimates of the probability density functions of the quantities of interest are improved, together with estimates of the statistics of their maxima. It is also demonstrated how the improved statistical model adds critical insight to the performance of the model.
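Only the first, manifold-discovery step of such a procedure is sketched below (diffusion maps on synthetic samples); the generation of new samples via a reduced-basis Itô sampler, on which the full procedure relies, is omitted:

```python
import numpy as np

rng = np.random.default_rng(3)

# Small sample set lying near a 1-d manifold in 3-d (a stand-in for a
# handful of expensive LES samples of the quantities of interest).
t = rng.uniform(0, 2 * np.pi, 80)
X = np.c_[np.cos(t), np.sin(t), 0.1 * rng.standard_normal(80)]

# Diffusion-maps step of probabilistic learning on manifolds: a
# Gaussian affinity, row-normalized into a Markov kernel, whose
# leading nontrivial eigenvectors give reduced coordinates.
eps = 0.5
D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-D2 / (4.0 * eps))
P = K / K.sum(axis=1, keepdims=True)
vals, vecs = np.linalg.eig(P)
order = np.argsort(-vals.real)
coords = vecs[:, order[1:3]].real   # skip the trivial constant mode

# New samples would then be generated on this reduced basis; the
# coordinates below just expose the low-dimensional structure.
print("reduced coordinates shape:", coords.shape)
```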
A procedure for determining the joint uncertainty of Arrhenius parameters across multiple combustion reactions of interest is demonstrated. This approach is capable of constructing the joint distribution of the Arrhenius parameters arising from the uncertain measurements performed in specific target experiments without having direct access to the underlying experimental data. The method involves constructing an ensemble of hypothetical data sets with summary statistics consistent with the available information reported by the experimentalists, followed by a fitting procedure that learns the structure of the joint parameter density across reactions, using these hypothetical data sets as evidence. The procedure is formalized in a Bayesian statistical framework, employing maximum-entropy and approximate Bayesian computation methods and utilizing efficient Markov chain Monte Carlo techniques to explore data and parameter spaces in a nested algorithm. We demonstrate the application of the method in the context of experiments designed to measure the rates of selected chain reactions in the H2-O2 system and highlight the utility of this approach for revealing the critical correlations between the parameters within a single reaction and across reactions, as well as for maximizing consistency when utilizing rate parameter information in predictive combustion modeling of systems of interest.
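A much-reduced sketch of the nested idea, with hypothetical summary statistics and flat priors assumed throughout: an outer loop draws maximum-entropy-consistent (Gaussian) hypothetical data sets, and an inner ABC rejection step accepts Arrhenius parameters that reproduce them.

```python
import numpy as np

rng = np.random.default_rng(4)
R = 8.314  # J/(mol K)

# Reported summary statistics only (hypothetical numbers): mean and
# standard deviation of ln k measured at three temperatures.
T = np.array([1000.0, 1200.0, 1500.0])
lnk_mean = np.array([4.18, 5.49, 6.79])
lnk_std = np.array([0.15, 0.12, 0.10])

def arrhenius_lnk(lnA, E):
    return lnA - E / (R * T)

# Outer loop: hypothetical data sets consistent with the summaries
# (the maximum-entropy choice given mean and std is Gaussian).
# Inner loop: ABC rejection over (lnA, E) with those sets as evidence.
accepted = []
for _ in range(200):
    hyp = lnk_mean + lnk_std * rng.standard_normal(3)
    for _ in range(200):
        lnA = rng.uniform(8.0, 14.0)      # flat priors (assumed)
        E = rng.uniform(4.0e4, 1.2e5)
        if np.max(np.abs(arrhenius_lnk(lnA, E) - hyp)) < 0.2:  # tolerance
            accepted.append((lnA, E))
post = np.array(accepted)
if post.size:
    print("joint sample size:  ", len(post))
    print("correlation(lnA, E):", np.corrcoef(post.T)[0, 1])
```

The accepted pairs exhibit the strong correlation between lnA and E within a reaction that the procedure is designed to reveal.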
In this work, we provide a method for enhancing stochastic Galerkin moment calculations for the linear elliptic equation with random diffusivity using an ensemble of Monte Carlo solutions. This hybrid approach combines the accuracy of low-order stochastic Galerkin methods with the computational efficiency of Monte Carlo methods to provide statistical moment estimates that are significantly more accurate than those obtained by either method individually. The hybrid approach involves computing a low-order stochastic Galerkin solution, after which Monte Carlo techniques are used to estimate the residual. We show that the combined stochastic Galerkin solution and residual estimate is superior in both time and accuracy, for both the mean and the variance, on a one-dimensional test problem and on a more computationally intensive two-dimensional linear elliptic problem.
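A sketch of the hybrid estimator on an assumed one-dimensional toy setup, correcting only the mean (the work above also treats the variance): a first-order stochastic Galerkin solution is corrected by a Monte Carlo estimate of its residual.

```python
import numpy as np

rng = np.random.default_rng(5)

# 1-d elliptic problem -(a u')' = 1 on (0,1), u(0) = u(1) = 0, with
# random diffusivity a(xi) = 1 + 0.3*xi, xi ~ U(-1,1) (assumed setup).
n = 99
h = 1.0 / (n + 1)
K = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2      # stiffness for a = 1
f = np.ones(n)

# Low-order (p = 1) stochastic Galerkin: u(x,xi) ~ u0(x) + u1(x)*xi.
# Projection with E[xi] = 0, E[xi^2] = 1/3 gives a 2x2 block system.
m2 = 1.0 / 3.0
G = np.block([[K,            0.3 * m2 * K],
              [0.3 * m2 * K, m2 * K      ]])
rhs = np.concatenate([f, np.zeros(n)])
sol = np.linalg.solve(G, rhs)
u0, u1 = sol[:n], sol[n:]                       # SG mean is u0

# Monte Carlo residual correction: sample xi, solve the deterministic
# problem exactly, and average the gap from the SG surrogate.
M = 50
resid = np.zeros(n)
for _ in range(M):
    xi = rng.uniform(-1.0, 1.0)
    u_exact = np.linalg.solve((1.0 + 0.3 * xi) * K, f)
    resid += u_exact - (u0 + u1 * xi)
mean_hybrid = u0 + resid / M                    # SG mean + MC correction

print("midpoint mean, SG only:", u0[n // 2])
print("midpoint mean, hybrid: ", mean_hybrid[n // 2])
```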