To influence physical mechanical system design decisions and realize the full promise of high-fidelity computational tools, simulation results must be integrated at the earliest stages of the design process. This is particularly challenging when dealing with uncertainty and optimizing for system-level performance metrics, because full-system models (often notoriously expensive and time-consuming to develop) are generally required to propagate uncertainties to system-level quantities of interest. Methods for propagating parameter and boundary-condition uncertainty in networks of interconnected components hold promise for enabling design under uncertainty in real-world applications. These methods avoid time-consuming regeneration of full-system meshes when components or subassemblies are changed, and they explicitly tie full-system model predictions to component- and subassembly-level validation data, which is valuable for qualification. They exploit the fact that many engineered systems are inherently modular, comprising a hierarchy of components and subassemblies that are individually modified or replaced to define new system designs, thereby enabling rapid model development and the incorporation of uncertainty quantification (UQ) earlier in the design process. The resulting formulation of the uncertainty propagation problem is iterative: we express the system model as a network of interconnected component models that exchange solution information at component boundaries. We present a pair of approaches for propagating uncertainty in this type of decomposed system and provide implementations in an open-source software library. We demonstrate these tools on a variety of applications and examine how problem-specific details affect the performance and accuracy of the resulting UQ analysis. This work represents the most comprehensive investigation of these network uncertainty propagation methods to date.
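To make the iterative coupling concrete, the following is a minimal sketch rather than the library's actual interface: two hypothetical algebraic component models exchange a boundary value in a Gauss-Seidel-style fixed-point loop, and Monte Carlo sampling of their uncertain parameters propagates uncertainty to a system-level quantity of interest. The component equations, parameter distributions, and convergence settings are all illustrative assumptions.

```python
# Sketch of network uncertainty propagation with two hypothetical components
# that exchange a single boundary value until the coupling converges.
import numpy as np

def component_a(theta_a, boundary_in):
    # Stand-in for an expensive component model; returns its boundary output.
    return 0.5 * boundary_in + theta_a

def component_b(theta_b, boundary_in):
    return 0.3 * boundary_in + theta_b

def solve_network(theta_a, theta_b, tol=1e-8, max_iter=100):
    """Gauss-Seidel-style iteration exchanging boundary data between components."""
    u_ab, u_ba = 0.0, 0.0  # boundary values passed between the two components
    for _ in range(max_iter):
        u_ab_new = component_a(theta_a, u_ba)
        u_ba_new = component_b(theta_b, u_ab_new)
        converged = abs(u_ab_new - u_ab) < tol and abs(u_ba_new - u_ba) < tol
        u_ab, u_ba = u_ab_new, u_ba_new
        if converged:
            break
    return u_ba  # treated here as the system-level quantity of interest

# Monte Carlo over uncertain component parameters (assumed distributions).
rng = np.random.default_rng(0)
samples = [solve_network(rng.normal(1.0, 0.1), rng.normal(2.0, 0.2))
           for _ in range(2000)]
print(f"QoI mean = {np.mean(samples):.3f}, std = {np.std(samples):.3f}")
```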
Thermally activated batteries undergo a series of coupled physical changes during activation that influence battery performance. These processes include energetic material burning, heat transfer, electrolyte phase change, capillary-driven two-phase porous flow, ion transport, electrochemical reactions, and electrical transport. Several of these processes are strongly coupled and have a significant effect on battery performance, but others have minimal impact or may be adequately represented by reduced-order models. Assessing the relative importance of these phenomena must be based on comparisons to a high-fidelity model that includes all known processes. In this work, we first present and demonstrate a high-fidelity, multi-physics model of electrochemical performance. This novel multi-physics model enables predictions of how competing physical processes affect battery performance and provides unique insights into the difficult-to-measure processes that occur during battery activation. We introduce four categories of model fidelity that apply different physical simplifications, assumptions, and reduced-order models to decouple or remove costly elements of the simulation. Using this approach, we show an order-of-magnitude reduction in computational cost while preserving all design-relevant quantities of interest to within 5 percent. The validity of this approach and these model reductions is demonstrated by comparing results from the full-fidelity model with those from the different reduced models.
This paper addresses two challenges in Bayesian calibration: (1) the computational speed of existing sampling algorithms and (2) calibration with spatiotemporal responses. The commonly used Markov chain Monte Carlo (MCMC) approaches require many sequential model evaluations, making the computational expense prohibitive. This paper proposes an efficient sampling algorithm: iterative importance sampling with genetic algorithm (IISGA). While iterative importance sampling enables computational efficiency, the genetic algorithm enables robustness by preventing sample degeneration and avoiding entrapment in multimodal search spaces. An inflated likelihood further enables robustness in high-dimensional parameter spaces by enlarging the target distribution. Spatiotemporal data complicate both surrogate modeling, which is necessary for expensive computational models, and likelihood estimation. In this work, singular value decomposition is investigated for reducing the high-dimensional field data to a lower-dimensional space prior to Bayesian calibration. The likelihood is then formulated and Bayesian inference is performed in the lower-dimensional latent space. An illustrative example is provided to compare IISGA with existing sampling methods, and IISGA is then employed to calibrate a thermal battery model with 26 uncertain calibration parameters and spatiotemporal response data.
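As an illustration of the latent-space likelihood idea, the following sketch uses synthetic data and assumed noise settings: a snapshot matrix of flattened spatiotemporal responses is reduced with a truncated SVD, and a Gaussian log-likelihood is evaluated on the retained latent coefficients. The thermal battery model, the IISGA sampler, and the paper's actual likelihood formulation are not reproduced here.

```python
# Sketch: SVD reduction of spatiotemporal field data and a latent-space
# Gaussian log-likelihood, using synthetic snapshots in place of model runs.
import numpy as np

rng = np.random.default_rng(1)
n_space_time, n_runs = 500, 40                       # flattened field size, training runs
snapshots = rng.normal(size=(n_space_time, n_runs))  # stand-in for simulated responses

# Truncated SVD: columns of U_r span the dominant response modes.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 5
U_r = U[:, :r]

def latent(field):
    """Project a full spatiotemporal field onto the r leading SVD modes."""
    return U_r.T @ field

def log_likelihood(sim_field, obs_field, noise_std=0.1):
    """Gaussian log-likelihood evaluated on latent coefficients (assumed noise model)."""
    resid = latent(sim_field) - latent(obs_field)
    return -0.5 * np.sum((resid / noise_std) ** 2)

# Example evaluation against noisy "observed" data built from one snapshot.
obs = snapshots[:, 0] + rng.normal(scale=0.05, size=n_space_time)
print(log_likelihood(snapshots[:, 0], obs))
```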
A strategy to optimize the thermal efficiency of falling particle receivers (FPRs) in concentrating solar power applications is described in this paper. FPRs are a critical component of a falling particle system, and receiver designs with high thermal efficiencies (~90%) for particle outlet temperatures > 700°C have been targeted for next-generation systems. Advective losses are one of the most significant loss mechanisms for FPRs; hence, this optimization aims to find receiver geometries that passively minimize these losses. The optimization strategy consists of a series of simulations varying geometric parameters of a conceptual receiver design for the Generation 3 Particle Pilot Plant (G3P3) project, using simplified computational fluid dynamics (CFD) models of the flow. A linear polynomial surrogate model was fit to the resulting data set, and a global optimization routine was then executed on the surrogate to reveal an optimized receiver geometry that minimized advective losses. This optimized geometry was then evaluated with more rigorous CFD models, revealing a thermal efficiency of 86.9% for an average particle temperature increase of 193.6°C and advective losses of less than 3.5% of the total incident thermal power under quiescent conditions.
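The surrogate-based step can be summarized with a small sketch using made-up design points and geometry parameters: a linear polynomial surrogate is fit by least squares to sampled loss values, and a global optimizer then searches the geometry bounds for the minimum predicted advective loss. The actual G3P3 parameterization, bounds, and CFD results are not represented.

```python
# Sketch: linear polynomial surrogate + global optimization over two
# hypothetical receiver geometry parameters, using synthetic loss data.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(2)
# Hypothetical geometry parameters (e.g., an aperture depth and width).
X = rng.uniform([0.5, 1.0], [2.0, 3.0], size=(30, 2))
loss = 0.08 - 0.02 * X[:, 0] + 0.015 * X[:, 1] + rng.normal(scale=1e-3, size=30)

# Linear surrogate: loss ~ c0 + c1*x1 + c2*x2, fit by least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, loss, rcond=None)

def surrogate(x):
    return coef[0] + coef[1] * x[0] + coef[2] * x[1]

# Global search over the geometry bounds for the minimum predicted loss.
result = differential_evolution(surrogate, bounds=[(0.5, 2.0), (1.0, 3.0)], seed=0)
print("optimal geometry:", result.x, "predicted advective loss:", result.fun)
```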
Causality in an engineered system pertains to how a system output changes due to a controlled change or intervention on the system or system environment. Engineered system designs reflect a causal theory regarding how a system will work, and predicting the reliability of such systems typically requires knowledge of this underlying causal structure. The aim of this work is to introduce causal modeling tools that inform reliability predictions based on biased data sources. We present a novel application of the popular structural causal modeling (SCM) framework to reliability estimation in an engineering application, illustrating how this framework can inform whether reliability is estimable and how to estimate reliability given a set of data and assumptions about the subject matter and data-generating mechanism. When data are insufficient for estimation, sensitivity studies based on problem-specific knowledge can inform how much reliability estimates could change due to biases in the data and what data should be collected next to provide the most additional information. We apply the approach to a pedagogical example related to a real, but proprietary, engineering application, considering how two types of biases in data can influence a reliability calculation.
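The following sketch, built on invented structural equations rather than the proprietary application, illustrates the kind of comparison the SCM framework supports: a reliability estimate computed naively from a selection-biased data set versus the interventional quantity implied by an assumed causal structure.

```python
# Sketch of a structural causal model: environmental stress S drives both a
# defect indicator D and a performance margin M; reliability is P(M > 0).
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
S = rng.normal(size=n)                        # latent environmental stress
D = (S + rng.normal(scale=0.5, size=n)) > 1   # defect indicator, caused by S
M = 1.0 - 0.8 * S - 1.5 * D + rng.normal(scale=0.3, size=n)  # margin

# Naive (biased) estimate: conditioning on observed defect-free units also
# implicitly selects low-stress conditions.
naive = np.mean(M[~D] > 0)

# Interventional estimate under do(D = 0): remove the defect while leaving
# the stress distribution untouched, per the assumed causal structure.
M_do = 1.0 - 0.8 * S + rng.normal(scale=0.3, size=n)
interventional = np.mean(M_do > 0)

print(f"biased observational reliability: {naive:.3f}")
print(f"reliability under do(D=0):        {interventional:.3f}")
```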
Current quantification of margin and uncertainty (QMU) guidance lacks a consistent framework for communicating the credibility of analysis results. Recent efforts at providing QMU guidance have pushed for broadening the analyses supporting QMU results beyond extrapolative statistical models to include a more holistic picture of risk, incorporating information garnered from both experimental campaigns and computational simulations. Credibility guidance would assist in the consideration of belief-based aspects of an analysis. Such guidance exists for presenting computational simulation-based analyses and is under development for the integration of experimental data into computational simulations (calibration or validation), but it is absent for the ultimate QMU product resulting from experimental or computational analyses. A QMU credibility assessment framework comprising five elements is proposed: requirement definitions and quantity-of-interest selection, data quality, model uncertainty, calibration/parameter estimation, and validation. By considering and reporting on these elements during a QMU analysis, the decision-maker receives a more complete description of the analysis and is better positioned to understand the risks involved in using the analysis to support a decision. A molten salt battery application is used to demonstrate the proposed QMU credibility framework.
Within the current paradigm of striving for simulation credibility, computational simulation predictions of multiphysics engineering systems must acknowledge the sources of uncertainty in the prediction and include them in the analysis. A thermal analysis of an aerospace geometry was performed at Sandia National Laboratories. A verification, validation, and uncertainty quantification (VVUQ) workflow provided structure for this analysis, resulting in the quantification of significant uncertainty sources, including spatial numerical error and material-property parametric uncertainty. It was hypothesized that the parametric uncertainty and numerical errors were independent and separable for this application. This hypothesis was supported by performing uncertainty quantification (UQ) simulations at multiple mesh resolutions, while limited resources constrained the number of medium- and high-resolution simulations. Based on this supported hypothesis, a prediction combining parametric uncertainty with a systematic mesh bias is used to make a margin assessment that avoids obscuring the results with unnecessary uncertainty and makes efficient use of computing resources.
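A minimal sketch of this separability assumption, using made-up numbers rather than the aerospace analysis results: a coarse-mesh UQ ensemble is shifted by a systematic mesh bias estimated from a few matched coarse/fine runs, and margin is then assessed against a hypothetical requirement.

```python
# Sketch: combine parametric uncertainty (coarse-mesh UQ ensemble) with a
# systematic mesh bias, treated as separable, for a margin assessment.
import numpy as np

rng = np.random.default_rng(4)

# Coarse-mesh UQ ensemble of a temperature quantity of interest (QoI).
qoi_coarse = rng.normal(loc=540.0, scale=12.0, size=5000)

# Systematic mesh bias from a small number of matched coarse/fine runs.
coarse_runs = np.array([538.0, 545.0, 541.0])
fine_runs   = np.array([544.0, 551.0, 548.0])
mesh_bias = np.mean(fine_runs - coarse_runs)   # fine minus coarse

qoi_corrected = qoi_coarse + mesh_bias         # bias-shifted prediction
limit = 600.0                                  # hypothetical requirement
margin = limit - np.percentile(qoi_corrected, 97.5)
print(f"mesh bias = {mesh_bias:.1f}, margin to limit = {margin:.1f}")
```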