Concurrent V&V/UQ and Code Capability Development: Hypersonic Reentry Analysis with SPARC
Abstract not provided.
AIAA Scitech Forum
We propose herein a probabilistic framework for assessing the consistency of an experimental dataset, i.e., whether the stated experimental conditions are consistent with the measurements provided. If the dataset is inconsistent, our framework allows one to hypothesize and test sources of the inconsistency. This is crucial in model validation efforts. The framework relies on Bayesian inference to estimate experimental settings deemed uncertain from measurements deemed accurate. The quality of the inferred variables is gauged by their ability to reproduce held-out experimental measurements. We test the correctness of the framework on three double-cone experiments conducted in CUBRC Inc.'s LENS-I shock tunnel, which have also been simulated successfully. Thereafter, we use the framework to investigate two double-cone experiments (executed in the LENS-XX shock tunnel) that have caused difficulties when used in model validation exercises. We detect an inconsistency with one of the LENS-XX experiments. In addition, we hypothesize two causes for our inability to simulate the LENS-XX experiments accurately and test them using our framework. We find that no single cause explains all the discrepancies between model predictions and experimental data; rather, different causes explain different discrepancies, to a larger or smaller extent. We end by proposing that uncertainty quantification methods be used more widely to understand experiments and characterize facilities, and we cite three different methods for doing so, the third of which we present in this paper.
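The following is a minimal sketch, not the authors' code, of the consistency-check idea the abstract describes: infer an uncertain experimental setting from measurements deemed accurate via Bayesian inference, then judge the inference by how well it reproduces held-out measurements. The forward model, prior width, noise level, and sampler settings below are illustrative placeholders, not values from the paper.

```python
# Sketch of a Bayesian consistency check for an experimental dataset.
# All model details here are hypothetical stand-ins for illustration.
import numpy as np

rng = np.random.default_rng(0)

def forward_model(setting, locations):
    """Hypothetical surrogate mapping an experimental setting (e.g., a
    freestream condition) to predicted measurements at sensor locations."""
    return setting * np.exp(-0.5 * locations)

# Synthetic "experiment": the true setting differs from the stated one,
# mimicking an inconsistent dataset.
stated_setting, true_setting = 1.0, 1.3
locations = np.linspace(0.0, 2.0, 8)
noise_sd = 0.02
data = forward_model(true_setting, locations) + rng.normal(0, noise_sd, locations.size)

# Hold out some measurements for the validation step.
fit_idx, holdout_idx = np.arange(0, 6), np.arange(6, 8)

def log_posterior(setting):
    # Gaussian prior centered on the *stated* condition; Gaussian likelihood
    # over the measurements used for calibration.
    log_prior = -0.5 * ((setting - stated_setting) / 0.5) ** 2
    resid = data[fit_idx] - forward_model(setting, locations[fit_idx])
    return log_prior - 0.5 * np.sum((resid / noise_sd) ** 2)

# Random-walk Metropolis sampler over the uncertain setting.
samples, current, lp = [], stated_setting, log_posterior(stated_setting)
for _ in range(20000):
    prop = current + rng.normal(0, 0.05)
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        current, lp = prop, lp_prop
    samples.append(current)
post = np.array(samples[5000:])  # discard burn-in

# Validation: does the inferred setting reproduce held-out measurements?
pred = forward_model(post.mean(), locations[holdout_idx])
print(f"stated={stated_setting:.2f}  posterior mean={post.mean():.3f}")
print("held-out residuals (in noise-sd units):",
      (data[holdout_idx] - pred) / noise_sd)
```

In this toy setup, a posterior that pulls strongly away from the stated condition, or held-out residuals far outside the noise level, would flag the dataset as inconsistent with its stated conditions.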
Conference Proceedings of the Society for Experimental Mechanics Series
Experiments are a critical part of the model validation process, and the credibility of the resulting simulations is itself dependent on the credibility of the experiments. The impact of experimental credibility on model validation occurs at several points throughout the model validation and uncertainty quantification (MVUQ) process. Many aspects of experiments involved in the development and verification and validation (V&V) of computational simulations will impact the overall simulation credibility. In this document, we define experimental credibility in the context of model validation and decision making. We summarize possible elements for evaluating experimental credibility, sometimes drawing from existing and preliminary frameworks developed for evaluating computational simulation credibility. The proposed framework is an expert elicitation tool for planning, assessing, and communicating the completeness and correctness of an experiment ("test") in the context of its intended use—validation. The goals of the assessment are (1) to encourage early communication and planning among the experimentalist, computational analyst, and customer, and (2) to communicate experimental credibility. This assessment tool could also be used to decide among potential existing data sets to be used for validation. The evidence and story of experimental credibility will support the communication of overall simulation credibility.
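As a rough illustration only: one way such an elicitation tool could be structured in code is as a record of assessment elements with maturity scores and supporting evidence. The element names, maturity scale, and fields below are assumptions for illustration; the abstract does not prescribe a specific schema.

```python
# Hypothetical schema for an experimental-credibility assessment record.
from dataclasses import dataclass, field

@dataclass
class CredibilityElement:
    name: str            # assessed aspect, e.g., "measurement uncertainty quantified"
    maturity: int        # assessed level on an assumed 0-3 scale
    evidence: str = ""   # pointer to supporting documentation

@dataclass
class ExperimentAssessment:
    experiment: str
    intended_use: str    # the validation question the test must answer
    elements: list[CredibilityElement] = field(default_factory=list)

    def summary(self) -> dict:
        """Collapse the assessment to element-name -> maturity level."""
        return {e.name: e.maturity for e in self.elements}

# Example use: comparing candidate data sets by their assessed maturity.
assessment = ExperimentAssessment(
    experiment="double-cone run 4",
    intended_use="validation of aerothermal heating predictions",
    elements=[
        CredibilityElement("measurement uncertainty quantified", 2),
        CredibilityElement("facility conditions characterized", 1),
    ],
)
print(assessment.summary())
```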