To impact physical mechanical system design decisions and realize the full promise of high-fidelity computational tools, simulation results must be integrated at the earliest stages of the design process. This is particularly challenging when dealing with uncertainty and optimizing for system-level performance metrics, as full-system models (often notoriously expensive and time-consuming to develop) are generally required to propagate uncertainties to system-level quantities of interest. Methods for propagating parameter and boundary condition uncertainty in networks of interconnected components hold promise for enabling design under uncertainty in real-world applications. These methods avoid the need for time-consuming mesh generation of full-system geometries when changes are made to components or subassemblies. Additionally, they explicitly tie full-system model predictions to component/subassembly validation data, which is valuable for qualification. These methods work by leveraging the fact that many engineered systems are inherently modular, being composed of a hierarchy of components and subassemblies that are individually modified or replaced to define new system designs. By doing so, these methods enable rapid model development and the incorporation of uncertainty quantification (UQ) earlier in the design process. The resulting formulation of the uncertainty propagation problem is iterative. We express the system model as a network of interconnected component models, which exchange solution information at component boundaries. We present a pair of approaches for propagating uncertainty in this type of decomposed system and provide implementations in the form of an open-source software library. We demonstrate these tools on a variety of applications and show the impact of problem-specific details on the performance and accuracy of the resulting UQ analysis. This work represents the most comprehensive investigation of these network uncertainty propagation methods to date.
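As a minimal, self-contained illustration of the iterative formulation described above (not the paper's open-source library), the following Python sketch couples two hypothetical lumped component models through a shared interface value via fixed-point iteration, then propagates uncertain component parameters by re-converging the network for each Monte Carlo sample; all models, parameter ranges, and values are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "component" models: each maps its uncertain parameter and the
# interface value it receives to the interface value it returns.
def component_a(k_a, t_interface):
    # e.g., a lumped conduction model pulling the interface toward a hot boundary
    return 400.0 + k_a * (t_interface - 400.0)

def component_b(k_b, t_interface):
    # e.g., a lumped convection model pulling the interface toward ambient
    return 300.0 + k_b * (t_interface - 300.0)

def coupled_solve(k_a, k_b, tol=1e-8, max_iter=200):
    """Fixed-point iteration on the shared interface value exchanged by the components."""
    t = 350.0  # initial interface guess
    for _ in range(max_iter):
        t_new = 0.5 * (component_a(k_a, t) + component_b(k_b, t))
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t

# Propagate parameter uncertainty through the decomposed system by
# re-converging the coupled network for each sample of the uncertain inputs.
n_samples = 2000
k_a = rng.uniform(0.2, 0.4, n_samples)
k_b = rng.uniform(0.1, 0.3, n_samples)
interface_temps = np.array([coupled_solve(a, b) for a, b in zip(k_a, k_b)])

print(f"interface mean = {interface_temps.mean():.2f}, std = {interface_temps.std():.2f}")
```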
Thermally activated batteries undergo a series of coupled physical changes during activation that influence battery performance. These processes include energetic material burning, heat transfer, electrolyte phase change, capillary-driven two-phase porous flow, ion transport, electrochemical reactions, and electrical transport. Several of these processes are strongly coupled and have a significant effect on battery performance, but others have minimal impact or may be suitably represented by reduced-order models. Assessing the relative importance of these phenomena must be based on comparisons to a high-fidelity model including all known processes. In this work, we first present and demonstrate a high-fidelity, multi-physics model of electrochemical performance. This novel multi-physics model enables predictions of how competing physical processes affect battery performance and provides unique insights into the difficult-to-measure processes that happen during battery activation. We introduce four categories of model fidelity that include different physical simplifications, assumptions, and reduced-order models to decouple or remove costly elements of the simulation. Using this approach, we show an order-of-magnitude reduction in computational cost while preserving all design-relevant quantities of interest within 5 percent. The validity of this approach and these model reductions is demonstrated by comparing results from the full-fidelity model with those from the different reduced models.
This paper addresses two challenges in Bayesian calibration: (1) the computational speed of existing sampling algorithms and (2) calibration with spatiotemporal responses. The commonly used Markov chain Monte Carlo (MCMC) approaches require many sequential model evaluations, making the computational expense prohibitive. This paper proposes an efficient sampling algorithm: iterative importance sampling with genetic algorithm (IISGA). While iterative importance sampling enables computational efficiency, the genetic algorithm enables robustness by preventing sample degeneration and avoiding entrapment in multimodal search spaces. An inflated likelihood further enables robustness in high-dimensional parameter spaces by enlarging the target distribution. Spatiotemporal data complicate both surrogate modeling, which is necessary for expensive computational models, and likelihood estimation. In this work, singular value decomposition is investigated for reducing the high-dimensional field data to a lower-dimensional space prior to Bayesian calibration. The likelihood is then formulated and Bayesian inference is performed in the lower-dimensional latent space. An illustrative example is provided to demonstrate IISGA relative to existing sampling methods, and then IISGA is employed to calibrate a thermal battery model with 26 uncertain calibration parameters and spatiotemporal response data.
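The following Python sketch illustrates only the dimension-reduction portion of the workflow described above: projecting spatiotemporal field data onto a truncated singular value decomposition basis and formulating a Gaussian likelihood (with an inflation factor) in the latent space. It is a hedged illustration with synthetic data and placeholder settings, not the IISGA implementation or the thermal battery calibration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic spatiotemporal field data: rows = space-time points, cols = snapshots.
n_field, n_snapshots = 500, 40
snapshots = rng.normal(size=(n_field, n_snapshots))
mean_field = snapshots.mean(axis=1)

# Reduce the field to a low-dimensional latent space via the thin SVD.
U, s, _ = np.linalg.svd(snapshots - mean_field[:, None], full_matrices=False)
r = 5                      # number of retained modes (problem dependent)
basis = U[:, :r]

def to_latent(field):
    """Project a full space-time field onto the retained SVD modes."""
    return basis.T @ (field - mean_field)

def log_likelihood(model_field, observed_field, sigma=0.1, inflation=1.0):
    """Gaussian log-likelihood formulated in the latent space.
    An inflation factor > 1 enlarges the target distribution for robustness."""
    resid = to_latent(model_field) - to_latent(observed_field)
    return -0.5 * np.sum(resid**2) / (inflation * sigma**2)

# Example evaluation with a synthetic "observation" and a perturbed prediction.
obs = snapshots[:, 0]
pred = obs + rng.normal(scale=0.05, size=n_field)
print(log_likelihood(pred, obs))
```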
A strategy to optimize the thermal efficiency of falling particle receivers (FPRs) in concentrating solar power applications is described in this paper. FPRs are a critical component of a falling particle system, and receiver designs with high thermal efficiencies (~90%) for particle outlet temperatures > 700°C have been targeted for next-generation systems. Advective losses are one of the most significant loss mechanisms for FPRs. Hence, this optimization aims to find receiver geometries that passively minimize these losses. The optimization strategy consists of a series of simulations varying different geometric parameters on a conceptual receiver design for the Generation 3 Particle Pilot Plant (G3P3) project, using simplified computational fluid dynamics (CFD) models of the flow. A linear polynomial surrogate model was fit to the resulting data set, and a global optimization routine was then executed on the surrogate to reveal an optimized receiver geometry that minimized advective losses. This optimized receiver geometry was then evaluated with more rigorous CFD models, revealing a thermal efficiency of 86.9% for an average particle temperature increase of 193.6°C and advective losses less than 3.5% of the total incident thermal power in quiescent conditions.
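A hedged sketch of the surrogate-based step described above, using synthetic design-of-experiments data in place of the G3P3 CFD results: a linear polynomial surrogate is fit by least squares and a global optimizer is run over the surrogate to suggest a loss-minimizing geometry. The parameter names, bounds, and coefficients below are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(2)

# Placeholder design-of-experiments data: columns are hypothetical geometric
# parameters (e.g., aperture height, cavity depth, tilt angle) and y is the
# fractional advective loss returned by a simplified flow simulation.
X = rng.uniform([0.5, 1.0, 0.0], [1.5, 3.0, 30.0], size=(40, 3))
y = (0.08 - 0.02 * X[:, 0] + 0.01 * X[:, 1] - 0.001 * X[:, 2]
     + rng.normal(scale=0.002, size=40))

# Fit a linear polynomial surrogate: loss ~ c0 + c1*x1 + c2*x2 + c3*x3.
A = np.column_stack([np.ones(len(X)), X])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

def surrogate(x):
    """Predicted advective loss for a candidate geometry."""
    return coeffs[0] + coeffs[1:] @ np.asarray(x)

# Global optimization over the surrogate to minimize advective losses.
bounds = [(0.5, 1.5), (1.0, 3.0), (0.0, 30.0)]
result = differential_evolution(surrogate, bounds, seed=2)
print("optimized geometry:", result.x, "predicted loss:", result.fun)
```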
Causality in an engineered system pertains to how a system output changes due to a controlled change or intervention on the system or system environment. Engineered system designs reflect a causal theory regarding how a system will work, and predicting the reliability of such systems typically requires knowledge of this underlying causal structure. The aim of this work is to introduce causal modeling tools that inform reliability predictions based on biased data sources. We present a novel application of the popular structural causal modeling (SCM) framework to reliability estimation in an engineering application, illustrating how this framework can inform whether reliability is estimable and how to estimate reliability given a set of data and assumptions about the subject matter and data-generating mechanism. When data are insufficient for estimation, sensitivity studies based on problem-specific knowledge can inform how much reliability estimates can change due to biases in the data and what information should be collected next to provide the most additional information. We apply the approach to a pedagogical example related to a real, but proprietary, engineering application, considering how two types of biases in the data can influence a reliability calculation.
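As a hypothetical illustration of how a causal model can inform reliability estimation from biased data (this is not the proprietary application or the paper's own example), the sketch below simulates a data-generating mechanism in which the tested units over-represent a harsh environment, then compares a naive reliability estimate with a backdoor-adjusted estimate that uses assumed knowledge of the population environment distribution.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# Hypothetical data-generating mechanism: a harsh environment E both lowers
# the chance a unit functions (Y = 1) and raises the chance it is tested
# (T = 1), so the tested population is biased toward harsh conditions.
E = rng.binomial(1, 0.3, n)                 # 1 = harsh environment
p_work = np.where(E == 1, 0.90, 0.99)
Y = rng.binomial(1, p_work)                 # 1 = unit functions
p_test = np.where(E == 1, 0.8, 0.2)
T = rng.binomial(1, p_test)                 # 1 = unit appears in the data set

# Naive estimate from the biased (tested-only) data underestimates reliability.
naive = Y[T == 1].mean()

# Backdoor adjustment: reweight the stratified estimates by the population
# environment distribution (assumed known from subject-matter knowledge).
p_harsh = 0.3
adjusted = (Y[(T == 1) & (E == 1)].mean() * p_harsh
            + Y[(T == 1) & (E == 0)].mean() * (1 - p_harsh))

print(f"naive: {naive:.4f}, adjusted: {adjusted:.4f}, true: {np.mean(p_work):.4f}")
```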
Current quantification of margin and uncertainty (QMU) guidance lacks a consistent framework for communicating the credibility of analysis results. Recent efforts at providing QMU guidance have pushed for broadening the analyses supporting QMU results beyond extrapolative statistical models to include a more holistic picture of risk, including information garnered from both experimental campaigns and computational simulations. Credibility guidance would assist in the consideration of belief-based aspects of an analysis. Such guidance exists for presenting computational simulation-based analyses and is under development for the integration of experimental data into computational simulations (calibration or validation), but is absent for the ultimate QMU product resulting from experimental or computational analyses. A QMU credibility assessment framework comprising five elements is proposed: requirement definitions and quantity of interest selection, data quality, model uncertainty, calibration/parameter estimation, and validation. By considering and reporting on these elements during a QMU analysis, the decision-maker will receive a more complete description of the analysis and be better positioned to understand the risks involved in using the analysis to support a decision. A molten salt battery application is used to demonstrate the proposed QMU credibility framework.
When making computational simulation predictions of multiphysics engineering systems, sources of uncertainty in the prediction need to be acknowledged and included in the analysis within the current paradigm of striving for simulation credibility. A thermal analysis of an aerospace geometry was performed at Sandia National Laboratories. For this analysis, a verification, validation, and uncertainty quantification (VVUQ) workflow provided structure for the analysis, resulting in the quantification of significant uncertainty sources including spatial numerical error and material property parametric uncertainty. It was hypothesized that the parametric uncertainty and numerical errors were independent and separable for this application. This hypothesis was supported by performing uncertainty quantification (UQ) simulations at multiple mesh resolutions, while resource constraints limited the number of medium- and high-resolution simulations. Based on this supported hypothesis, a prediction including parametric uncertainty and a systematic mesh bias is used to make a margin assessment that avoids unnecessary uncertainty obscuring the results and optimizes the use of computing resources.
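A minimal sketch of the separability idea described above, with entirely illustrative numbers: parametric uncertainty is propagated on the affordable mesh, a systematic mesh bias is estimated from nominal runs at higher resolutions, and the bias-shifted distribution is compared against a hypothetical requirement for a margin assessment.

```python
import numpy as np

rng = np.random.default_rng(4)

# Parametric UQ performed on the affordable (coarse) mesh: placeholder samples
# of a peak-temperature quantity of interest, in K.
coarse_samples = rng.normal(loc=612.0, scale=8.0, size=500)

# Nominal-parameter runs at increasing mesh resolution, used to estimate a
# systematic spatial-discretization bias (values are illustrative only).
nominal_by_mesh = {"coarse": 612.0, "medium": 606.5, "fine": 604.8}
mesh_bias = nominal_by_mesh["fine"] - nominal_by_mesh["coarse"]   # about -7.2 K

# Under the separability hypothesis, apply the bias as a shift to the
# coarse-mesh parametric distribution rather than rerunning the full UQ
# at high resolution.
corrected = coarse_samples + mesh_bias

# Margin assessment against a hypothetical not-to-exceed requirement.
requirement = 650.0   # K
p95 = np.percentile(corrected, 95)
print(f"95th percentile: {p95:.1f} K, margin to requirement: {requirement - p95:.1f} K")
```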
This paper examines the variability of predicted responses when multiple stress-strain curves (reflecting variability from replicate material tests) are propagated through a finite element model of a ductile steel can being slowly crushed. Over 140 response quantities of interest (including displacements, stresses, strains, and calculated measures of material damage) are tracked in the simulations. Each response quantity’s behavior varies according to the particular stress-strain curves used for the materials in the model. We desire to estimate response variability when only a few stress-strain curve samples are available from material testing. Propagation of just a few samples will usually result in significantly underestimated response uncertainty relative to propagation of a much larger population that adequately samples the presiding random-function source. A simple classical statistical method, Tolerance Intervals, is tested for effectively treating sparse stress-strain curve data. The method is found to perform well on the highly nonlinear input-to-output response mappings and non-standard response distributions in the can-crush problem. The results and discussion in this paper support a proposition that the method will apply similarly well for other sparsely sampled random variable or function data, whether from experiments or models. Finally, the simple Tolerance Interval method is also demonstrated to be very economical.
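For reference, a short Python sketch of a standard two-sided normal-theory tolerance interval (Howe's approximation) computed from a handful of samples is given below; the sample values and the coverage/confidence settings are placeholders, and the specific TI formulation used in the paper may differ.

```python
import numpy as np
from scipy.stats import norm, chi2

def normal_tolerance_interval(x, coverage=0.95, confidence=0.95):
    """Two-sided normal tolerance interval (Howe's approximation): an interval
    claimed to contain `coverage` of the population with `confidence`
    confidence, computed from a sparse sample x."""
    x = np.asarray(x, dtype=float)
    n = x.size
    z = norm.ppf(0.5 * (1.0 + coverage))
    chi2_q = chi2.ppf(1.0 - confidence, n - 1)
    k = np.sqrt((n - 1) * (1.0 + 1.0 / n) * z**2 / chi2_q)
    return x.mean() - k * x.std(ddof=1), x.mean() + k * x.std(ddof=1)

# Example with a handful of response samples (placeholder values):
samples = [4.1, 3.8, 4.4, 4.0, 3.9]
lo, hi = normal_tolerance_interval(samples)
print(f"95%/95% tolerance interval: [{lo:.2f}, {hi:.2f}]")
```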
When making computational simulation predictions of multi-physics engineering systems, sources of uncertainty in the prediction need to be acknowledged and included in the analysis within the current paradigm of striving for simulation credibility. A thermal analysis of an aerospace geometry was performed at Sandia National Laboratories. For this analysis, a verification, validation, and uncertainty quantification workflow provided structure for the analysis, resulting in the quantification of significant uncertainty sources including spatial numerical error and material property parametric uncertainty. It was hypothesized that the parametric uncertainty and numerical errors were independent and separable for this application. This hypothesis was supported by performing uncertainty quantification simulations at multiple mesh resolutions, while resource constraints limited the number of medium- and high-resolution simulations. Based on this supported hypothesis, a prediction including parametric uncertainty and a systematic mesh bias is used to make a margin assessment that avoids unnecessary uncertainty obscuring the results and optimizes the use of computing resources.
When very few samples of a random quantity are available from a source distribution of unknown shape, it is usually not possible to accurately infer the exact distribution from which the data samples come. Underestimation of important quantities such as response variance and failure probabilities can result. For many engineering purposes, including design and risk analysis, we attempt to avoid underestimation with a strategy to conservatively estimate (bound) these types of quantities -- without being overly conservative -- when only a few samples of a random quantity are available from model predictions or replicate experiments. This report examines a class of related sparse-data uncertainty representation and inference approaches that are relatively simple, inexpensive, and effective. Tradeoffs between the methods' conservatism, reliability, and risk versus number of data samples (cost) are quantified with multi-attribute metrics used to assess method performance for conservative estimation of two representative quantities: the central 95% of the response, and a 10^-4 probability of exceeding a response threshold in a tail of the distribution. Each method's performance is characterized with 10,000 random trials on a large number of diverse and challenging distributions. The best method and number of samples to use in a given circumstance depend on the uncertainty quantity to be estimated, the character of the underlying PDF, and the desired reliability of bounding the true value. On the basis of this large database and study, a strategy is proposed for selecting the method and number of samples for attaining reasonable credibility levels in bounding these types of quantities when sparse samples of random variables or functions are available from experiments or simulations.
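The sketch below illustrates the kind of trial-based characterization described above, with assumed settings rather than the report's actual methods or test distributions: it repeatedly draws sparse samples from a skewed "true" distribution and records how often a normal-theory tolerance interval bounds the true central 95% of the response.

```python
import numpy as np
from scipy.stats import norm, chi2, lognorm

rng = np.random.default_rng(6)

# True (but "unknown") skewed response distribution and its central 95% range.
true_dist = lognorm(s=0.6, scale=1.0)
lo_true, hi_true = true_dist.ppf(0.025), true_dist.ppf(0.975)

def tolerance_interval(x, coverage=0.95, confidence=0.95):
    """Two-sided normal-theory tolerance interval (Howe's k-factor)."""
    n = x.size
    z = norm.ppf(0.5 * (1.0 + coverage))
    k = np.sqrt((n - 1) * (1 + 1 / n) * z**2 / chi2.ppf(1 - confidence, n - 1))
    return x.mean() - k * x.std(ddof=1), x.mean() + k * x.std(ddof=1)

# Characterize conservatism/reliability versus sample count with random trials:
# the fraction of trials in which the interval bounds the true central 95%.
for n in (5, 10, 20):
    hits = 0
    for _ in range(10_000):
        lo, hi = tolerance_interval(true_dist.rvs(n, random_state=rng))
        hits += (lo <= lo_true) and (hi >= hi_true)
    print(f"n={n:2d}: bounded the true central 95% in {hits / 10_000:.1%} of trials")
```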
Characterizing the tails of probability distributions plays a key role in quantification of margins and uncertainties (QMU), where the goal is characterization of low probability, high consequence events based on continuous measures of performance. When data are collected using physical experimentation, probability distributions are typically fit using statistical methods based on the collected data, and these parametric distributional assumptions are often used to extrapolate about the extreme tail behavior of the underlying probability distribution. In this project, we characterize the risk associated with such tail extrapolation. Specifically, we conducted a scaling study to demonstrate the large magnitude of the risk; then, we developed new methods for communicating risk associated with tail extrapolation from unvalidated statistical models; lastly, we proposed a Bayesian data-integration framework to mitigate tail extrapolation risk through integrating additional information. We conclude that decision-making using QMU is a complex process that cannot be achieved using statistical analyses alone.
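As an illustrative (not project-specific) demonstration of tail-extrapolation risk, the following sketch fits a normal model to modest samples drawn from a heavier-tailed distribution and extrapolates the 10^-4 quantile, showing how often and by how much the tail is underestimated; the distributions, sample size, and trial count are assumptions.

```python
import numpy as np
from scipy.stats import norm, t

rng = np.random.default_rng(7)

# "Truth": a heavier-tailed response than the analyst's assumed normal family.
true_dist = t(df=4, loc=0.0, scale=1.0)
n = 30                        # a plausible experimental sample size
p_target = 1e-4               # tail probability of interest in QMU

true_quantile = true_dist.ppf(1.0 - p_target)

# Repeatedly fit the (wrong) normal model and extrapolate the 1e-4 quantile.
estimates = []
for _ in range(5_000):
    x = true_dist.rvs(n, random_state=rng)
    estimates.append(norm.ppf(1.0 - p_target, loc=x.mean(), scale=x.std(ddof=1)))
estimates = np.array(estimates)

print(f"true 1e-4 quantile: {true_quantile:.2f}")
print(f"median extrapolated quantile: {np.median(estimates):.2f}")
print(f"fraction of analyses underestimating the tail: {np.mean(estimates < true_quantile):.1%}")
```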
A discussion of the five responses to the 2014 Sandia Verification and Validation (V&V) Challenge Problem, presented within this special issue, is provided hereafter. Overviews of the challenge problem workshop, the workshop participants, and the problem statement are also included. Brief summations of the teams' responses to the challenge problem are provided. Issues that arose throughout the responses that are deemed applicable to the general verification, validation, and uncertainty quantification (VVUQ) community are the main focal point of this paper. The discussion is organized around a big-picture comparison of data and model usage, VVUQ activities, and the differentiating conceptual themes behind the teams' VVUQ strategies. Significant differences are noted in the teams' approaches toward all VVUQ activities, and those deemed most relevant are discussed. Beyond the specific details of the VVUQ implementations, thematic concepts are found to create differences among the approaches; some of the major themes are discussed. Lastly, an encapsulation of the key contributions, the lessons learned, and advice for the future is presented.
This work examines the variability of predicted responses when multiple stress-strain curves (reflecting variability from replicate material tests) are propagated through a transient dynamics finite element model of a ductile steel can being slowly crushed. An elastic-plastic constitutive model is employed in the large-deformation simulations. The present work assigns the same material to all the can parts: lids, walls, and weld. Time histories of 18 response quantities of interest (including displacements, stresses, strains, and calculated measures of material damage) at several locations on the can and various points in time are monitored in the simulations. Each response quantity's behavior varies according to the particular stress-strain curves used for the materials in the model. We estimate response variability due to variability of the input material curves. When only a few stress-strain curves are available from material testing, response variance will usually be significantly underestimated. This is undesirable for many engineering purposes. This paper describes the can-crush model and simulations used to evaluate a simple classical statistical method, Tolerance Intervals (TIs), for effectively compensating for sparse stress-strain curve data in the can-crush problem. Using the simulation results presented here, the accuracy and reliability of the TI method are being evaluated on the highly nonlinear input-to-output response mappings and non-standard response distributions in the can-crush UQ problem.