Publications

126 Results

What can simulation test beds teach us about social science? Results of the ground truth program

Computational and Mathematical Organization Theory

Naugle, Asmeret B.; Krofcheck, Daniel J.; Warrender, Christina E.; Lakkaraju, Kiran L.; Swiler, Laura P.; Verzi, Stephen J.; Emery, Ben; Murdock, Jaimie; Bernard, Michael L.; Romero, Vicente J.

The ground truth program used simulations as test beds for social science research methods. The simulations had known ground truth and were capable of producing large amounts of data. This allowed research teams to run experiments and ask questions of these simulations similar to social scientists studying real-world systems, and enabled robust evaluation of their causal inference, prediction, and prescription capabilities. We tested three hypotheses about research effectiveness using data from the ground truth program, specifically looking at the influence of complexity, causal understanding, and data collection on performance. We found some evidence that system complexity and causal understanding influenced research performance, but no evidence that data availability contributed. The ground truth program may be the first robust coupling of simulation test beds with an experimental framework capable of teasing out factors that determine the success of social science research.

Feedback density and causal complexity of simulation model structure

Journal of Simulation

Naugle, Asmeret B.; Verzi, Stephen J.; Lakkaraju, Kiran L.; Swiler, Laura P.; Warrender, Christina E.; Bernard, Michael L.; Romero, Vicente J.

Measures of simulation model complexity generally focus on outputs; we propose measuring the complexity of a model’s causal structure to gain insight into its fundamental character. This article introduces tools for measuring causal complexity. First, we introduce a method for developing a model’s causal structure diagram, which characterises the causal interactions present in the code. Causal structure diagrams facilitate comparison of simulation models, including those from different paradigms. Next, we develop metrics for evaluating a model’s causal complexity using its causal structure diagram. We discuss cyclomatic complexity as a measure of the intricacy of causal structure and introduce two new metrics that incorporate the concept of feedback, a fundamental component of causal structure. The first new metric introduced here is feedback density, a measure of the cycle-based interconnectedness of causal structure. The second metric combines cyclomatic complexity and feedback density into a comprehensive causal complexity measure. Finally, we demonstrate these complexity metrics on simulation models from multiple paradigms and discuss potential uses and interpretations. These tools enable direct comparison of models across paradigms and provide a mechanism for measuring and discussing complexity based on a model’s fundamental assumptions and design.
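
To make the metrics above concrete, the sketch below computes cyclomatic complexity and a cycle-based feedback density on a small hypothetical causal graph using networkx; the feedback-density definition here (share of causal links lying on a directed cycle) is an illustrative assumption rather than the paper's exact formula.

```python
# Sketch: cyclomatic complexity and a cycle-based "feedback density" for a
# causal structure diagram, assuming networkx is available. The feedback-
# density definition (share of edges lying on a directed cycle) is an
# illustrative stand-in for the paper's metric.
import networkx as nx

# Hypothetical causal structure: nodes are model quantities, edges are causal links.
G = nx.DiGraph([
    ("susceptible", "infection_rate"),
    ("infection_rate", "infected"),
    ("infected", "infection_rate"),      # reinforcing feedback loop
    ("infected", "recovered"),
    ("recovered", "susceptible"),        # loss-of-immunity loop (assumed)
    ("infected", "deaths"),              # edge not on any feedback loop
])

E, N = G.number_of_edges(), G.number_of_nodes()
P = nx.number_weakly_connected_components(G)
cyclomatic = E - N + 2 * P               # classic cyclomatic complexity

# An edge (u, v) lies on a directed cycle iff u and v share a strongly
# connected component.
scc_id = {n: i for i, c in enumerate(nx.strongly_connected_components(G)) for n in c}
feedback_edges = sum(scc_id[u] == scc_id[v] for u, v in G.edges)
feedback_density = feedback_edges / E

print(f"cyclomatic complexity = {cyclomatic}, feedback density = {feedback_density:.2f}")
```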

Graph-Based Similarity Metrics for Comparing Simulation Model Causal Structures

Naugle, Asmeret B.; Swiler, Laura P.; Lakkaraju, Kiran L.; Verzi, Stephen J.; Warrender, Christina E.; Romero, Vicente J.

The causal structure of a simulation is a major determinant of both its character and behavior, yet most methods we use to compare simulations focus only on simulation outputs. We introduce a method that combines graphical representation with information theoretic metrics to quantitatively compare the causal structures of models. The method applies to agent-based simulations as well as system dynamics models and facilitates comparison within and between types. Comparing models based on their causal structures can illuminate differences in assumptions made by the models, allowing modelers to (1) better situate their models in the context of existing work, including highlighting novelty, (2) explicitly compare conceptual theory and assumptions to simulated theory and assumptions, and (3) investigate potential causal drivers of divergent behavior between models. We demonstrate the method by comparing two epidemiology models at different levels of aggregation.
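
As a rough, hypothetical stand-in for this kind of quantitative comparison, the sketch below scores two small causal graphs by the Jensen-Shannon distance between their out-degree distributions; it is not the specific information-theoretic metric developed in the paper.

```python
# Sketch: a crude graph-comparison score based on the Jensen-Shannon distance
# between out-degree distributions. This is an illustrative stand-in, not the
# metric introduced in the paper; networkx and scipy are assumed available.
import networkx as nx
import numpy as np
from scipy.spatial.distance import jensenshannon

def out_degree_hist(G, max_deg):
    counts = np.zeros(max_deg + 1)
    for _, d in G.out_degree():
        counts[d] += 1
    return counts / counts.sum()

# Two hypothetical epidemiology causal structures at different aggregation levels.
G1 = nx.DiGraph([("S", "I"), ("I", "S"), ("I", "R")])
G2 = nx.DiGraph([("S", "E"), ("E", "I"), ("I", "S"), ("I", "R"), ("R", "S")])

m = max(max(d for _, d in G1.out_degree()), max(d for _, d in G2.out_degree()))
dist = jensenshannon(out_degree_hist(G1, m), out_degree_hist(G2, m))
print(f"Jensen-Shannon distance between degree distributions: {dist:.3f}")
```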

The Ground Truth Program: Simulations as Test Beds for Social Science Research Methods

Computational and Mathematical Organization Theory

Naugle, Asmeret B.; Russell, Adam R.; Lakkaraju, Kiran L.; Swiler, Laura P.; Verzi, Stephen J.; Romero, Vicente J.

Social systems are uniquely complex and difficult to study, but understanding them is vital to solving the world’s problems. The Ground Truth program developed a new way of testing the research methods that attempt to understand and leverage the Human Domain and its associated complexities. The program developed simulations of social systems as virtual world test beds. Not only were these simulations able to produce data on future states of the system under various circumstances and scenarios, but their causal ground truth was also explicitly known. Research teams studied these virtual worlds, facilitating deep validation of causal inference, prediction, and prescription methods. The Ground Truth program model provides a way to test and validate research methods to an extent previously impossible, and to study the intricacies and interactions of different components of research.

Arguments for the Generality and Effectiveness of “Discrete Direct” Model Calibration and Uncertainty Propagation vs. Other Calibration-UQ Approaches

AIAA Science and Technology Forum and Exposition, AIAA SciTech Forum 2022

Romero, Vicente J.

This paper describes and analyzes the Discrete Direct (DD) model calibration and uncertainty propagation approach for computational models calibrated to data from sparse replicate tests of stochastically varying phenomena. The DD approach consists of generating and propagating discrete realizations of possible calibration parameter values corresponding to possible realizations of the uncertain inputs and outputs of the experiments. This is in contrast to model calibration methods that attempt to assign or infer continuous probability density functions for the calibration parameters. The DD approach straightforwardly accommodates aleatory variabilities and epistemic uncertainties (interval and/or probabilistically represented) in system properties and behaviors, in input initial and boundary conditions, and in measurement uncertainties of experimental inputs and outputs. In particular, the approach has several advantages over Bayesian and other calibration techniques in capturing and utilizing the information obtained from the typically small number of replicate experiments in model calibration situations, especially when sparse realizations of random function data like force-displacement curves from replicate material tests are used for calibration. The DD approach better preserves the fundamental information from the experimental data in a way that enables model predictions to be more directly tied to the supporting experimental data. The DD methodology is also simpler and typically less expensive than other established calibration-UQ approaches, is straightforward to implement, and is plausibly more reliably conservative and accurate for sparse-data calibration-UQ problems. The methodology is explained and analyzed in this paper under several regimes of model calibration and uncertainty propagation circumstances.
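
A minimal sketch of the discrete-direct idea on a hypothetical one-parameter problem: each replicate measurement is inverted to a single discrete calibration-parameter realization (no probability density is inferred), and the discrete set is then propagated through a downstream prediction model. The spring-like model, parameter range, and data are assumptions for illustration only.

```python
# Sketch of the discrete-direct idea on a toy problem: each replicate test
# yields one discrete calibration-parameter realization, and the discrete set
# is propagated through a prediction model. Model forms and data are hypothetical.
import numpy as np
from scipy.optimize import brentq

def test_model(k, x=2.0):          # calibration observable: force at 2 mm
    return k * x + 0.05 * k * x**2

def prediction_model(k):           # downstream prediction: force at 5 mm
    return k * 5.0 + 0.05 * k * 25.0

# Sparse replicate experiments (four synthetic measurements of force at 2 mm).
measured = np.array([10.4, 11.1, 9.8, 10.7])

# One discrete parameter realization per replicate: invert the test model.
k_set = [brentq(lambda k, f=f: test_model(k) - f, 1.0, 10.0) for f in measured]

# Propagate the discrete set; summarize predicted variability directly.
preds = np.array([prediction_model(k) for k in k_set])
print("calibrated k realizations:", np.round(k_set, 3))
print("predicted response range: [%.2f, %.2f]" % (preds.min(), preds.max()))
```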

Discrete-Direct Model Calibration and Uncertainty Propagation Method Confirmed on Multi-Parameter Plasticity Model Calibrated to Sparse Random Field Data

ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part B: Mechanical Engineering

Romero, Vicente J.; Winokur, Justin W.; Orient, George E.; Dempsey, James F.

A discrete direct (DD) model calibration and uncertainty propagation approach is explained and demonstrated on a 4-parameter Johnson-Cook (J-C) strain-rate dependent material strength model for an aluminum alloy. The methodology’s performance is characterized in many trials involving four random realizations of strain-rate dependent material-test data curves per trial, drawn from a large synthetic population. The J-C model is calibrated to particular combinations of the data curves to obtain calibration parameter sets which are then propagated to “Can Crush” structural model predictions to produce samples of predicted response variability. These are processed with appropriate sparse-sample uncertainty quantification (UQ) methods to estimate various statistics of response with an appropriate level of conservatism. This is tested on 16 output quantities (von Mises stresses and equivalent plastic strains) and it is shown that important statistics of the true variabilities of the 16 quantities are bounded with a high success rate that is reasonably predictable and controllable. The DD approach has several advantages over other calibration-UQ approaches like Bayesian inference for capturing and utilizing the information obtained from typically small numbers of replicate experiments in model calibration situations—especially when sparse replicate functional data are involved like force–displacement curves from material tests. The DD methodology is straightforward and efficient for calibration and propagation problems involving aleatory and epistemic uncertainties in calibration experiments, models, and procedures.

Adaptive polynomial response surfaces and level-1 probability boxes for propagating and representing aleatory and epistemic components of uncertainty

AIAA Scitech 2021 Forum

Romero, Vicente J.; Black, Amalia

When analyzing and predicting stochastic variability in a population of devices or systems, it is important to segregate epistemic lack-of-knowledge uncertainties and aleatory uncertainties due to stochastic variation in the population. This traditionally requires dual-loop Monte Carlo (MC) uncertainty propagation, where the outer loop samples the epistemic uncertainties and, for each realization, an inner loop samples and propagates the aleatory uncertainties. This results in various realizations of what the aleatory distribution of population response variability might be. Under certain conditions, the various possible realizations can be represented in a concise manner by approximate upper and lower bounding distributions of the same shape, composing a “Level 1” approximate probability box (L1 APbox). These are usually sufficient for model validation purposes, for example, and can be formed with substantially reduced computational cost and complication in propagating the aleatory and epistemic uncertainties (compared to dual-loop MC). Propagation cost can be further reduced by constructing and sampling response surface models that approximate the variation of physics-model output responses over the uncertainty parameter space. A simple dimension- and order-adaptive polynomial response surface approach is demonstrated for propagating the aleatory and epistemic uncertainties in a L1 APbox and for estimating the error contributed by using the surrogate model. Sensitivity analysis is also performed to quantify which uncertainty sources contribute most to the total aleatory-epistemic uncertainty in predicted response. The methodology is demonstrated as part of a model validation assessment involving thermal-chemical-mechanical response and weld breach failure of sealed canisters weakened by high temperatures and pressurized by heat-induced pyrolysis of foam.
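
A minimal numerical sketch of the dual-loop propagation and the resulting level-1 probability box, assuming a toy response model and an interval-valued (epistemic) mean with fixed aleatory scatter:

```python
# Sketch: dual-loop Monte Carlo producing a family of response CDFs, collapsed
# into pointwise bounding distributions (a level-1 approximate p-box).
# The response model and uncertainty characterizations are hypothetical.
import numpy as np

rng = np.random.default_rng(2)

def response(x):                        # toy physics model
    return np.exp(0.3 * x) + x

mu_interval = (1.0, 1.4)                # epistemic: population mean known only as an interval
sigma = 0.2                             # aleatory: stochastic variability in the population

grid = np.linspace(0.0, 6.0, 600)       # response values at which CDFs are evaluated
cdfs = []
for _ in range(50):                                           # outer (epistemic) loop
    mu = rng.uniform(*mu_interval)
    y = np.sort(response(rng.normal(mu, sigma, size=2000)))   # inner (aleatory) loop
    cdfs.append(np.searchsorted(y, grid) / y.size)

cdfs = np.array(cdfs)
bound_lo, bound_hi = cdfs.min(axis=0), cdfs.max(axis=0)       # pointwise CDF bounds
i = np.searchsorted(grid, 3.0)
print(f"P(response < 3.0) lies in [{bound_lo[i]:.3f}, {bound_hi[i]:.3f}]")
```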

Propagating and combining aleatory uncertainties characterized by continuous random variables and sparse discrete realizations from random functions

AIAA Scitech 2020 Forum

Romero, Vicente J.

This paper presents a practical methodology for propagating and combining the effects of random variations of several continuous scalar quantities and several random-function quantities affecting the failure pressure of a heated pressurized vessel. The random functions are associated with stress-strain curve test-to-test variability in replicate material strength tests (uniaxial tension tests) on nominally identical material specimens. It is demonstrated how to effectively propagate the curve-to-curve discrete variations and appropriately account for the small sample size of functional data realizations. This is coordinated with the propagation of aleatory variability described by uncertainty distributions for continuous scalar quantities of pressure-vessel wall thickness, weld depth, and thermal-contact factor. Motivated by the high expense of the pressure vessel simulations of heating, pressurization, and failure, a simple dimension- and order-adaptive polynomial response surface approach is used to propagate effects of the random variables and enable uncertainty estimates on the error contributed by using the surrogate model. Linear convolution is used to aggregate the resultant aleatory uncertainty from the parametrically propagated random variables with an appropriately conservative probability distribution of aleatory effects from propagating the multiple stress-strain curves for each material. The response surface constructions, Monte Carlo sampling of them for uncertainty propagation, and linear sensitivity analysis and convolution procedures, are demonstrated with standard EXCEL spreadsheet functions (no special software needed).

Approaches for quantifying uncertainties in computational modeling for aerospace applications

AIAA Scitech 2020 Forum

Schaefer, John; Leyde, Brian; Denham, Casey; Romero, Vicente J.; Schafer, Steven

In the past few decades, advancements in computing hardware and physical modeling capability have allowed computer models such as computational fluid dynamics to accelerate the development cycle of aerospace products. In general, model behavior is well-understood in the heart of the flight envelope, such as the cruise condition for a conventional commercial aircraft. Models have been well validated at these conditions, so the practice of running a single, deterministic solution to assess aircraft performance is sufficient for engineering purposes. However, the aerospace industry is beginning to apply models to configurations at the edge of the flight envelope. In this regime, uncertainty in the model due to its mathematical form, numerical behavior, or model parameters may become important. Uncertainty Quantification is the process of characterizing all major sources of uncertainty in the model and quantifying their effect on analysis outcomes. The goal of this paper is to survey modern uncertainty quantification methodologies and relate them to aerospace applications. Ultimately, uncertainty quantification enables modelers and simulation practitioners to make more informed statements about the uncertainty and associated degree of credibility of model-based predictions.

Bootstrapping and jackknife resampling to improve sparse-sample UQ methods for tail probability estimation

ASME 2019 Verification and Validation Symposium, VVS 2019

Jekel, Charles F.; Romero, Vicente J.

Tolerance Interval Equivalent Normal (TI-EN) and Superdistribution (SD) sparse-sample uncertainty quantification (UQ) methods are used for conservative estimation of small tail probabilities. These methods are used to estimate the probability of a response lying beyond a specified threshold with limited data. The study focused on sparse-sample regimes ranging from N = 2 to 20 samples, because this is reflective of most experimental and some expensive computational situations. A tail probability magnitude of 10^-4 was examined on four different distribution shapes, in order to be relevant for quantification of margins and uncertainty (QMU) problems that arise in risk and reliability analyses. In most cases the UQ methods were found to have optimal performance with a small number of samples, beyond which the performance deteriorated as samples were added. Using this observation, a generalized Jackknife resampling technique was developed to average many smaller subsamples. This improved the performance of the SD and TI-EN methods, specifically when a larger than optimal number of samples was available. A Complete Jackknifing technique, which considered all possible sub-sample combinations, was shown to perform better in most cases than an alternative Bootstrap resampling technique.
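
The complete-jackknife mechanics can be sketched as follows; the base tail-probability estimator below is a plain normal fit used only as a placeholder for the TI-EN and SD estimators studied in the paper, and the data and sub-sample size are illustrative.

```python
# Sketch of the complete-jackknife idea: average a tail-probability estimator
# over all sub-samples of a chosen size. The base estimator is a plain normal
# fit used only as a placeholder for the paper's TI-EN/SD methods.
import itertools
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

def tail_estimate(sample, threshold):
    # placeholder estimator: P(X > threshold) from a normal fit to the sample
    return norm.sf(threshold, loc=np.mean(sample), scale=np.std(sample, ddof=1))

def complete_jackknife(sample, threshold, subsize):
    # average the estimator over every sub-sample of size `subsize`
    estimates = [tail_estimate(np.asarray(s), threshold)
                 for s in itertools.combinations(sample, subsize)]
    return float(np.mean(estimates))

data = rng.normal(0.0, 1.0, size=12)            # sparse replicate data (N = 12)
threshold = norm.isf(1e-4)                      # true tail probability is 1e-4
print("direct estimate   :", tail_estimate(data, threshold))
print("jackknife estimate:", complete_jackknife(data, threshold, subsize=6))
```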

Simple effective conservative treatment of uncertainty from sparse samples of random functions

ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems. Part B. Mechanical Engineering

Romero, Vicente J.; Schroeder, Benjamin B.; Dempsey, James F.; Lewis, John R.; Breivik, Nicole L.; Orient, George E.; Antoun, Bonnie R.; Winokur, Justin W.; Glickman, Matthew R.; Red-Horse, John R.

This paper examines the variability of predicted responses when multiple stress-strain curves (reflecting variability from replicate material tests) are propagated through a finite element model of a ductile steel can being slowly crushed. Over 140 response quantities of interest (including displacements, stresses, strains, and calculated measures of material damage) are tracked in the simulations. Each response quantity’s behavior varies according to the particular stress-strain curves used for the materials in the model. We desire to estimate response variability when only a few stress-strain curve samples are available from material testing. Propagation of just a few samples will usually result in significantly underestimated response uncertainty relative to propagation of a much larger population that adequately samples the presiding random-function source. A simple classical statistical method, Tolerance Intervals, is tested for effectively treating sparse stress-strain curve data. The method is found to perform well on the highly nonlinear input-to-output response mappings and non-standard response distributions in the can-crush problem. The results and discussion in this paper support a proposition that the method will apply similarly well for other sparsely sampled random variable or function data, whether from experiments or models. Finally, the simple Tolerance Interval method is also demonstrated to be very economical.
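
For reference, a two-sided normal tolerance interval for a sparse sample can be computed with Howe's approximation for the k-factor; the coverage/confidence settings and sample values below are illustrative, not taken from the paper.

```python
# Sketch: two-sided normal tolerance interval for a sparse sample, using
# Howe's approximation for the k-factor (coverage p with confidence gamma).
# Sample data and settings are illustrative.
import numpy as np
from scipy.stats import norm, chi2

def tolerance_interval(sample, p=0.95, gamma=0.90):
    n = len(sample)
    df = n - 1
    z = norm.ppf((1.0 + p) / 2.0)
    k = z * np.sqrt(df * (1.0 + 1.0 / n) / chi2.ppf(1.0 - gamma, df))
    m, s = np.mean(sample), np.std(sample, ddof=1)
    return m - k * s, m + k * s

# e.g. a handful of peak-strain values from replicate can-crush simulations
sample = np.array([0.212, 0.198, 0.231, 0.205, 0.219])
lo, hi = tolerance_interval(sample)
print(f"95% coverage / 90% confidence tolerance interval: [{lo:.3f}, {hi:.3f}]")
```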

Discrete-Direct Model Calibration and Propagation Approach Addressing Sparse Replicate Tests and Material, Geometric, and Measurement Uncertainties

SAE Technical Papers

Romero, Vicente J.

This paper introduces the "Discrete Direct" (DD) model calibration and uncertainty propagation approach for computational models calibrated to data from sparse replicate tests of stochastically varying systems. The DD approach generates and propagates various discrete realizations of possible calibration parameter values corresponding to possible realizations of the uncertain inputs and outputs of the experiments. This is in contrast to model calibration methods that attempt to assign or infer continuous probability density functions for the calibration parameters, which adds unjustified information to the calibration and propagation problem. The DD approach straightforwardly accommodates aleatory variabilities and epistemic uncertainties in system properties and behaviors, in input initial and boundary conditions, and in measurement uncertainties in the experiments. The approach appears to have several advantages over Bayesian and other calibration approaches for capturing and utilizing the information obtained from the typically small number of experiments in model calibration situations. In particular, the DD methodology better preserves the fundamental information from the experimental data in a way that enables model predictions to be more directly traced back to the supporting experimental data. The approach is also presently more viable for calibration involving sparse realizations of random function data (e.g. stress-strain curves) and random field data. The DD methodology is conceptually simpler than Bayesian calibration approaches, and is straightforward to implement. The methodology is demonstrated and analyzed in this paper on several illustrative calibration and uncertainty propagation problems.

Evaluation of a Class of Simple and Effective Uncertainty Methods for Sparse Samples of Random Variables and Functions

Romero, Vicente J.; Bonney, Matthew S.; Schroeder, Benjamin B.; Weirs, Vincent G.

When very few samples of a random quantity are available from a source distribution of unknown shape, it is usually not possible to accurately infer the exact distribution from which the data samples come. Under-estimation of important quantities such as response variance and failure probabilities can result. For many engineering purposes, including design and risk analysis, we attempt to avoid under-estimation with a strategy to conservatively estimate (bound) these types of quantities -- without being overly conservative -- when only a few samples of a random quantity are available from model predictions or replicate experiments. This report examines a class of related sparse-data uncertainty representation and inference approaches that are relatively simple, inexpensive, and effective. Tradeoffs between the methods' conservatism, reliability, and risk versus number of data samples (cost) are quantified with multi-attribute metrics used to assess method performance for conservative estimation of two representative quantities: central 95% of response; and 10^-4 probability of exceeding a response threshold in a tail of the distribution. Each method's performance is characterized with 10,000 random trials on a large number of diverse and challenging distributions. The best method and number of samples to use in a given circumstance depend on the uncertainty quantity to be estimated, the PDF character, and the desired reliability of bounding the true value. On the basis of this large database and study, a strategy is proposed for selecting the method and number of samples for attaining reasonable credibility levels in bounding these types of quantities when sparse samples of random variables or functions are available from experiments or simulations.

Validation Assessment of a Glass-to-Metal Seal Finite-Element Model

Jamison, Ryan D.; Buchheit, Thomas E.; Emery, John M.; Romero, Vicente J.; Stavig, Mark E.; Newton, Clay S.; Brown, Arthur B.

Sealing glasses are ubiquitous in high pressure and temperature engineering applications, such as hermetic feed-through electrical connectors. A common connector technology is the glass-to-metal seal, where a metal shell compresses a sealing glass to create a hermetic seal. Though finite-element analysis has been used to understand and design glass-to-metal seals for many years, there has been little validation of these models. An indentation technique was employed to measure the residual stress on the surface of a simple glass-to-metal seal. Recently developed rate-dependent material models of both Schott 8061 and 304L VAR stainless steel have been applied to a finite-element model of the simple glass-to-metal seal. Model predictions of residual stress based on the evolution of material models are shown. These model predictions are compared to measured data. Validity of the finite-element predictions is discussed. It will be shown that the finite-element model of the glass-to-metal seal accurately predicts the mean residual stress in the glass near the glass-to-metal interface and is valid for this quantity of interest.

Applicability Analysis of Validation Evidence for Biomedical Computational Models

Journal of Verification, Validation and Uncertainty Quantification

Pathmanathan, Pras P.; Gray, Richard A.; Romero, Vicente J.; Morrison, Tina M.

Computational modeling has the potential to revolutionize medicine the way it transformed engineering. However, despite decades of work, there has only been limited progress to successfully translate modeling research to patient care. One major difficulty which often occurs with biomedical computational models is an inability to perform validation in a setting that closely resembles how the model will be used. For example, for a biomedical model that makes in vivo clinically relevant predictions, direct validation of predictions may be impossible for ethical, technological, or financial reasons. Unavoidable limitations inherent to the validation process lead to challenges in evaluating the credibility of biomedical model predictions. Therefore, when evaluating biomedical models, it is critical to rigorously assess applicability, that is, the relevance of the computational model, and its validation evidence to the proposed context of use (COU). However, there are no well-established methods for assessing applicability. In this paper, we present a novel framework for performing applicability analysis and demonstrate its use with a medical device computational model. The framework provides a systematic, step-by-step method for breaking down the broad question of applicability into a series of focused questions, which may be addressed using supporting evidence and subject matter expertise. The framework can be used for model justification, model assessment, and validation planning. While motivated by biomedical models, it is relevant to a broad range of disciplines and underlying physics. Finally, the proposed applicability framework could help overcome some of the barriers inherent to validation of, and aid clinical implementation of, biomedical models.

Analyst-to-Analyst Variability in Simulation-Based Prediction

Glickman, Matthew R.; Romero, Vicente J.

This report describes findings from the culminating experiment of the LDRD project entitled, "Analyst-to-Analyst Variability in Simulation-Based Prediction". For this experiment, volunteer participants solving a given test problem in engineering and statistics were interviewed at different points in their solution process. These interviews are used to trace differing solutions to differing solution processes, and differing processes to differences in reasoning, assumptions, and judgments. The issue that the experiment was designed to illuminate -- our paucity of understanding of the ways in which humans themselves have an impact on predictions derived from complex computational simulations -- is a challenging and open one. Although solution of the test problem by analyst participants in this experiment has taken much more time than originally anticipated, and is continuing past the end of this LDRD, this project has provided a rare opportunity to explore analyst-to-analyst variability in significant depth, from which we derive evidence-based insights to guide further explorations in this important area.

POF-Darts: Geometric adaptive sampling for probability of failure

Reliability Engineering and System Safety

Ebeida, Mohamed S.; Mitchell, Scott A.; Swiler, Laura P.; Romero, Vicente J.; Rushdi, Ahmad A.

We introduce a novel technique, POF-Darts, to estimate the Probability Of Failure based on random disk-packing in the uncertain parameter space. POF-Darts uses hyperplane sampling to explore the unexplored part of the uncertain space. We use the function evaluation at a sample point to determine whether it belongs to failure or non-failure regions, and surround it with a protection sphere region to avoid clustering. We decompose the domain into Voronoi cells around the function evaluations as seeds and choose the radius of the protection sphere depending on the local Lipschitz continuity. As sampling proceeds, the regions not covered by spheres shrink, improving the estimation accuracy. After exhausting the function evaluation budget, we build a surrogate model using the function evaluations associated with the sample points and estimate the probability of failure by exhaustive sampling of that surrogate. In comparison to other similar methods, our algorithm has the advantages of decoupling the sampling step from the surrogate construction one, the ability to reach target POF values with fewer samples, and the capability of estimating the number and locations of disconnected failure regions, not just the POF value. We present various examples to demonstrate the efficiency of our novel approach.
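
A heavily simplified sketch of the sphere-protected ("dart") sampling idea follows, with an assumed limit-state function, an ad hoc radius rule, and a nearest-neighbour surrogate in place of the method's Voronoi-based machinery; it illustrates the flavor of the algorithm, not its actual implementation.

```python
# Heavily simplified sketch of sphere-protected sampling: accepted samples are
# surrounded by protection spheres sized from a running Lipschitz estimate, and
# a cheap nearest-neighbour surrogate of the accepted samples is exhaustively
# sampled for the probability of failure. Limit-state function, radius cap, and
# budget are illustrative; the Voronoi machinery of the real method is omitted.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(11)

def g(x):                                    # toy limit state: failure when g < 0
    return np.linalg.norm(x - 0.5, axis=-1) - 0.35

pts, vals = [], []
L_est, budget, attempts = 1.0, 150, 0
while len(pts) < budget and attempts < 50_000:
    attempts += 1
    cand = rng.random(2)
    radii = [min(abs(v) / L_est, 0.15) for v in vals]
    if any(np.linalg.norm(cand - p) < r for p, r in zip(pts, radii)):
        continue                             # dart landed inside a protection sphere
    val = float(g(cand))
    for p, v in zip(pts, vals):              # refine the Lipschitz estimate
        L_est = max(L_est, abs(val - v) / np.linalg.norm(cand - p))
    pts.append(cand)
    vals.append(val)

# cheap surrogate: nearest-neighbour classification of a dense Monte Carlo cloud
pts, vals = np.array(pts), np.array(vals)
_, idx = cKDTree(pts).query(rng.random((100_000, 2)))
print(f"accepted samples: {len(pts)}, estimated POF: {np.mean(vals[idx] < 0):.3f}")
```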

Can-crush model and simulations for verifying uncertainty quantification method for sparse stress-strain curve data

ASME International Mechanical Engineering Congress and Exposition, Proceedings (IMECE)

Dempsey, James F.; Romero, Vicente J.; Breivik, Nicole L.; Orient, G.; Antoun, Bonnie R.; Schroeder, Benjamin B.; Lewis, John R.; Winokur, Justin W.

This work examines the variability of predicted responses when multiple stress-strain curves (reflecting variability from replicate material tests) are propagated through a transient dynamics finite element model of a ductile steel can being slowly crushed. An elastic-plastic constitutive model is employed in the large-deformation simulations. The present work assigns the same material to all the can parts: lids, walls, and weld. Time histories of 18 response quantities of interest (including displacements, stresses, strains, and calculated measures of material damage) at several locations on the can and various points in time are monitored in the simulations. Each response quantity's behavior varies according to the particular stress-strain curves used for the materials in the model. We estimate response variability due to variability of the input material curves. When only a few stress-strain curves are available from material testing, response variance will usually be significantly underestimated. This is undesirable for many engineering purposes. This paper describes the can-crush model and simulations used to evaluate a simple classical statistical method, Tolerance Intervals (TIs), for effectively compensating for sparse stress-strain curve data in the can-crush problem. Using the simulation results presented here, the accuracy and reliability of the TI method are being evaluated on the highly nonlinear input-to-output response mappings and non-standard response distributions in the can-crush UQ problem.

Efficient Probability of Failure Calculations for QMU using Computational Geometry LDRD 13-0144 Final Report

Mitchell, Scott A.; Ebeida, Mohamed S.; Romero, Vicente J.; Swiler, Laura P.; Rushdi, Ahmad A.; Abdelkader, Ahmad A.

This SAND report summarizes our work on the Sandia National Laboratory LDRD project titled "Efficient Probability of Failure Calculations for QMU using Computational Geometry" which was project #165617 and proposal #13-0144. This report merely summarizes our work. Those interested in the technical details are encouraged to read the full published results, and contact the report authors for the status of the software and follow-on projects.

UQ and V&V techniques applied to experiments and simulations of heated pipes pressurized to failure

Romero, Vicente J.; Dempsey, James F.; Antoun, Bonnie R.

This report demonstrates versatile and practical model validation and uncertainty quantification techniques applied to the accuracy assessment of a computational model of heated steel pipes pressurized to failure. The Real Space validation methodology segregates aleatory and epistemic uncertainties to form straightforward model validation metrics especially suited for assessing models to be used in the analysis of performance and safety margins. The methodology handles difficulties associated with representing and propagating interval and/or probabilistic uncertainties from multiple correlated and uncorrelated sources in the experiments and simulations including: material variability characterized by non-parametric random functions (discrete temperature dependent stress-strain curves); very limited (sparse) experimental data at the coupon testing level for material characterization and at the pipe-test validation level; boundary condition reconstruction uncertainties from spatially sparse sensor data; normalization of pipe experimental responses for measured input-condition differences among tests and for random and systematic uncertainties in measurement/processing/inference of experimental inputs and outputs; numerical solution uncertainty from model discretization and solver effects.

Modeling and simulation in validation assessment of failure predictions for high temperature pressurized pipes

Conference Proceedings of the Society for Experimental Mechanics Series

Dempsey, J.F.; Romero, Vicente J.; Antoun, Bonnie R.

A unique quasi-static, temperature-dependent, low-strain-rate finite element constitutive failure model has been developed at Sandia National Laboratories (Dempsey JF, Antoun B, Wellman G, Romero V, Scherzinger W (2010) Coupled thermal pressurization failure simulations with validation experiments. Presentation at ASME 2010 international mechanical engineering congress & exposition, Vancouver, 12-18 Nov 2010) and is being used to predict failure initiation of pressurized components at high temperature. In order to assess the accuracy of this constitutive model, validation experiments of a cylindrical stainless steel pipe, heated and pressurized to failure, are performed. This "pipe bomb" is instrumented with thermocouples and a pressure sensor whereby temperatures and pressure are recorded with time until failure occurs. The pressure and thermocouple temperatures are then mapped to a finite element model of this pipe bomb. Mesh refinement and temperature mapping impacts on failure pressure prediction in support of the model validation assessment are discussed.

A comparison of methods for representing sparsely sampled random quantities

Romero, Vicente J.; Swiler, Laura P.; Urbina, Angel U.

This report discusses the treatment of uncertainties stemming from relatively few samples of random quantities. The importance of this topic extends beyond experimental data uncertainty to situations involving uncertainty in model calibration, validation, and prediction. With very sparse data samples it is not practical to have a goal of accurately estimating the underlying probability density function (PDF). Rather, a pragmatic goal is that the uncertainty representation should be conservative so as to bound a specified percentile range of the actual PDF, say the range between the 0.025 and 0.975 percentiles, with reasonable reliability. A second, opposing objective is that the representation not be overly conservative; that it minimally over-estimate the desired percentile range of the actual PDF. The presence of the two opposing objectives makes the sparse-data uncertainty representation problem interesting and difficult. In this report, five uncertainty representation techniques are characterized for their performance on twenty-one test problems (over thousands of trials for each problem) according to these two opposing objectives and other performance measures. Two of the methods, statistical Tolerance Intervals and a kernel density approach specifically developed for handling sparse data, exhibit significantly better overall performance than the others.

An initial comparison of methods for representing and aggregating experimental uncertainties involving sparse data

Collection of Technical Papers - AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference

Romero, Vicente J.; Swiler, Laura P.; Urbina, Angel U.

This paper discusses the handling and treatment of uncertainties corresponding to relatively few data samples in experimental characterization of random quantities. The importance of this topic extends beyond experimental uncertainty to situations where the derived experimental information is used for model validation or calibration. With very sparse data it is not practical to have a goal of accurately estimating the underlying variability distribution (probability density function, PDF). Rather, a pragmatic goal is that the uncertainty representation should be conservative so as to bound a desired percentage of the actual PDF, say 95% included probability, with reasonable reliability. A second, opposing objective is that the representation not be overly conservative; that it minimally over-estimate the random-variable range corresponding to the desired percentage of the actual PDF. The performance of a variety of uncertainty representation techniques is tested and characterized in this paper according to these two opposing objectives. An initial set of test problems and results is presented here from a larger study currently underway.

Some statistical procedures to refine estimates of uncertainty when sparse data are available for model validation and calibration

Collection of Technical Papers - AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference

Romero, Vicente J.; Rutherford, Brian M.; Newcomer, Justin T.

This paper presents some statistical concepts and techniques for refining the expression of uncertainty arising from: a) random variability (aleatory uncertainty) of a random quantity; and b) contributed epistemic uncertainty due to limited sampling of the random quantity. The treatment is tailored to handling experimental uncertainty in a context of model validation and calibration. Two particular problems are considered. One involves deconvolving random measurement error from measured random response. The other involves exploiting a relationship between two random variates of a system and an independently characterized probability density of one of the variates.
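
A minimal moment-level illustration of the deconvolution problem, assuming independent additive measurement error with a known variance; this is an illustration of the idea, not necessarily the statistical procedure developed in the paper.

```python
# Minimal sketch: moment-level deconvolution of independent additive
# measurement error from observed response variability, assuming the error
# variance is known from instrument characterization. Data are synthetic.
import numpy as np

rng = np.random.default_rng(4)

true_response = rng.normal(10.0, 0.50, size=15)             # unknown in practice
measured = true_response + rng.normal(0.0, 0.30, size=15)   # additive error, std 0.30

var_measured = np.var(measured, ddof=1)
var_error = 0.30 ** 2                                        # assumed known
var_true_est = max(var_measured - var_error, 0.0)            # deconvolved variance
print(f"estimated response std: {np.sqrt(var_true_est):.3f} (true value 0.50)")
```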

Elements of a pragmatic approach for dealing with bias and uncertainty in experiments through predictions: experiment design and data conditioning; "real space" model validation and conditioning; hierarchical modeling and extrapolative prediction

Romero, Vicente J.

This report explores some important considerations in devising a practical and consistent framework and methodology for utilizing experiments and experimental data to support modeling and prediction. A pragmatic and versatile 'Real Space' approach is outlined for confronting experimental and modeling bias and uncertainty to mitigate risk in modeling and prediction. The elements of experiment design and data analysis, data conditioning, model conditioning, model validation, hierarchical modeling, and extrapolative prediction under uncertainty are examined. An appreciation can be gained for the constraints and difficulties at play in devising a viable end-to-end methodology. Rationale is given for the various choices underlying the Real Space end-to-end approach. The approach adopts and refines some elements and constructs from the literature and adds pivotal new elements and constructs. Crucially, the approach reflects a pragmatism and versatility derived from working many industrial-scale problems involving complex physics and constitutive models, steady-state and time-varying nonlinear behavior and boundary conditions, and various types of uncertainty in experiments and models. The framework benefits from a broad exposure to integrated experimental and modeling activities in the areas of heat transfer, solid and structural mechanics, irradiated electronics, and combustion in fluids and solids.

Comparison of Several Model Validation Conceptions against a "Real Space" End-to-End Approach

SAE International Journal of Materials and Manufacturing

Romero, Vicente J.

This paper explores some of the important considerations in devising a practical and consistent framework and methodology for working with experiments and experimental data in connection with modeling and prediction. The paper outlines a pragmatic and versatile "real-space" approach within which experimental and modeling uncertainties (correlated and uncorrelated, systematic and random, aleatory and epistemic) are treated to mitigate risk in modeling and prediction. The elements of data conditioning, model conditioning, model validation, hierarchical modeling, and extrapolative prediction under uncertainty are examined. An appreciation can be gained for the constraints and difficulties at play in devising a viable end-to-end methodology. The considerations and options are many, and a large variety of viewpoints and precedents exist in the literature, as surveyed here. Rationale is given for the various choices taken in assembling the novel real-space end-to-end framework. The framework adopts some elements and constructs from the literature (sometimes adding needed refinement), rejects others (even some currently popular ones), and adds pivotal new elements and constructs. Crucially, the approach reflects a pragmatism and versatility derived from working many industrial-scale problems involving complex physics and constitutive models, steady-state and time-varying nonlinear behavior and boundary conditions, and various categories of uncertainty in experiments and models. The framework benefits from a broad exposure to integrated experimental and modeling activities in the areas of heat transfer, structural mechanics, irradiated electronics, and combustion in fluids and solids.

Data & model conditioning for multivariate systematic uncertainty in model calibration, validation, and extrapolation

Romero, Vicente J.

This paper discusses implications and appropriate treatment of systematic uncertainty in experiments and modeling. Systematic uncertainty exists when experimental conditions, and/or measurement bias errors, and/or bias contributed by post-processing the data, are constant over the set of experiments but the particular values of the conditions and/or biases are unknown to within some specified uncertainty. Systematic uncertainties in experiments do not automatically show up in the output data, unlike random uncertainty which is revealed when multiple experiments are performed. Therefore, the output data must be properly 'conditioned' to reflect important sources of systematic uncertainty in the experiments. In industrial scale experiments the systematic uncertainty in experimental conditions (especially boundary conditions) is often large enough that the inference error on how the experimental system maps inputs to outputs is often quite substantial. Any such inference error and uncertainty thereof also has implications in model validation and calibration/conditioning; ignoring systematic uncertainty in experiments can lead to 'Type X' error in these procedures. Apart from any considerations of modeling and simulation, reporting of uncertainty associated with experimental results should include the effects of any significant systematic uncertainties in the experiments. This paper describes and illustrates the treatment of multivariate systematic uncertainties of interval and/or probabilistic natures, and combined cases. The paper also outlines a practical and versatile 'real-space' framework and methodology within which experimental and modeling uncertainties (correlated and uncorrelated, systematic and random, aleatory and epistemic) are treated to mitigate risk in model validation, calibration/conditioning, hierarchical modeling, and extrapolative prediction.

Coupled thermal-mechanical experiments for validation of pressurized, high temperature systems

Dempsey, James F.; Wellman, Gerald W.; Scherzinger, William M.; Connelly, Kevin C.; Romero, Vicente J.

Instrumented, fully coupled thermal-mechanical experiments were conducted to provide validation data for finite element simulations of failure in pressurized, high temperature systems. The design and implementation of the experimental methodology are described in another paper of this conference. Experimental coupling was accomplished on tubular 304L stainless steel specimens by mechanical loading imparted by internal pressurization and thermal loading by side radiant heating. Experimental parameters, including temperature and pressurization ramp rates, maximum temperature and pressure, phasing of the thermal and mechanical loading and specimen geometry details were studied. Experiments were conducted to increasing degrees of deformation, up to and including failure. Mechanical characterization experiments of the 304L stainless steel tube material were also completed for development of a thermal elastic-plastic material constitutive model used in the finite element simulations of the validation experiments. The material was characterized in tension at a strain rate of 0.001/s from room temperature to 800 °C. The tensile behavior of the tube material was found to differ substantially from 304L bar stock material, with the plasticity characteristics and strain to failure differing at every test temperature.

Application of a pragmatic interval-based "real space" approach to fire-model validation involving aleatory and epistemic uncertainty

Collection of Technical Papers - AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference

Romero, Vicente J.; Luketa, Anay L.; Sherman, Martin

This paper applies a pragmatic interval-based approach to validation of a fire dynamics model involving computational fluid dynamics, combustion, participating-media radiation, and heat transfer. Significant aleatory and epistemic sources of uncertainty exist in the experiments and simulations. The validation comparison of experimental and simulation results, and corresponding criteria and procedures for model affirmation or refutation, take place in "real space" as opposed to "difference space" where subtractive differences between experiments and simulations are assessed. The versatile model validation framework handles difficulties associated with representing and aggregating aleatory and epistemic uncertainties from multiple correlated and uncorrelated source types, including: experimental variability from multiple repeat experiments; uncertainty of experimental inputs; experimental output measurement uncertainties; uncertainties that arise in data processing and inference from raw simulation and experiment outputs; parameter and model-form uncertainties intrinsic to the model; and numerical solution uncertainty from model discretization effects. The framework and procedures of the model validation methodology are here applied to a difficult validation problem involving experimental and predicted calorimeter temperatures in a wind-driven hydrocarbon pool fire.

Validation and uncertainty quantification of Fuego simulations of calorimeter heating in a wind-driven hydrocarbon pool fire

Luketa, Anay L.; Romero, Vicente J.; Domino, Stefan P.; Glaze, D.J.; Figueroa Faria, Victor G.

The objective of this work is to perform an uncertainty quantification (UQ) and model validation analysis of simulations of tests in the cross-wind test facility (XTF) at Sandia National Laboratories. In these tests, a calorimeter was subjected to a fire and the thermal response was measured via thermocouples. The UQ and validation analysis pertains to the experimental and predicted thermal response of the calorimeter. The calculations were performed using Sierra/Fuego/Syrinx/Calore, an Advanced Simulation and Computing (ASC) code capable of predicting object thermal response to a fire environment. Based on the validation results at eight diversely representative TC locations on the calorimeter the predicted calorimeter temperatures effectively bound the experimental temperatures. This post-validates Sandia's first integrated use of fire modeling with thermal response modeling and associated uncertainty estimates in an abnormal-thermal QMU analysis.

Efficiencies from spatially-correlated uncertainty and sampling in continuous-variable ordinal optimization

SAE International Journal of Materials and Manufacturing

Romero, Vicente J.

A very general and robust approach to solving continuous-variable optimization problems involving uncertainty in the objective function is through the use of ordinal optimization. At each step in the optimization problem, improvement is based only on a relative ranking of the uncertainty effects on local design alternatives, rather than on precise quantification of the effect. One simply asks "Is that alternative better or worse than this one?", not "HOW MUCH better or worse is that alternative to this one?" The answer to the latter question requires precise characterization of the uncertainty, with the corresponding sampling/integration expense for precise resolution. By looking at things from an ordinal ranking perspective instead, the trade-off between computational expense and vagueness in the uncertainty characterization can be managed to make cost-effective stepping decisions in the design space. This paper demonstrates correct advancement in a continuous-variable probabilistic optimization problem despite extreme vagueness in the statistical characterization of the design options. It is explained and shown how spatial correlation of uncertainty in such design problems can be exploited to dramatically increase the efficiency of ordinal approaches to optimization under uncertainty.
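
The ordinal comparison at the core of this approach can be sketched as a sequential "is A better than B?" test that samples only until a required ranking confidence is reached; the objective function, noise model, and stopping rule below are illustrative assumptions, not the paper's specific procedure.

```python
# Sketch: ordinal comparison of two design alternatives under uncertainty.
# We only ask which alternative is better, sampling just enough to reach a
# required confidence in the ranking. Objective, noise, and stopping rule
# are illustrative.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(5)

def noisy_objective(design, n):
    # hypothetical expensive stochastic evaluation of a design point
    return (design - 1.0) ** 2 + rng.normal(0.0, 0.5, size=n)

def ordinal_better(design_a, design_b, confidence=0.95, batch=5, max_n=200):
    a, b = np.empty(0), np.empty(0)
    while a.size < max_n:
        a = np.concatenate([a, noisy_objective(design_a, batch)])
        b = np.concatenate([b, noisy_objective(design_b, batch)])
        _, pvalue = ttest_ind(a, b, equal_var=False)
        if pvalue < 1.0 - confidence:                # ranking resolved
            return a.mean() < b.mean(), a.size
    return a.mean() < b.mean(), a.size               # undecided: report best guess

better, n_used = ordinal_better(0.8, 1.6)
print(f"design A better than B: {better} (decided after {n_used} samples each)")
```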

Error estimation approaches for progressive response surfaces - more results

Conference Proceedings of the Society for Experimental Mechanics Series

Romero, Vicente J.; Slepoy, R.; Swiler, Laura P.; Giunta, A.A.; Krishnamurthy, T.

Response surface functions are often used as simple and inexpensive replacements for computationally expensive computer models that simulate the behavior of a complex system over some parameter space. "Progressive" response surfaces are built up incrementally as global information is added from new sample points added to the previous points in the parameter space. As the response surfaces are globally upgraded, indicators of the convergence of the response surface approximation to the exact (fitted) function can be inferred. Sampling points can be incrementally added in a structured or unstructured fashion. Whatever the approach, it is usually desirable to sample the entire parameter space uniformly (at least in early stages of sampling). At later stages of sampling, depending on the nature of the quantity being resolved, it may be desirable to continue sampling uniformly (progressive response surfaces), or to switch to a focusing/economizing strategy of preferentially sampling certain regions of the parameter space based on information gained in previous stages of sampling ("adaptive" response surfaces). Here we consider progressive response surfaces where a balanced representation of global response over the parameter space is desired. We use Kriging and Moving-Least-Squares methods to fit Halton quasi-Monte-Carlo data samples and interpolate over the parameter space. On 2-D test problems we use the response surfaces to compute various response measures and assess the accuracy/applicability of heuristic error estimates based on convergence behavior of the computed response quantities. Where applicable we apply Richardson Extrapolation for estimates of error, and assess the accuracy of these estimates. We seek to develop a robust methodology for constructing progressive response surface approximations with reliable error estimates.
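
A condensed sketch of the progressive idea follows, assuming a toy 2-D test function, Halton sampling via scipy, and Gaussian-process regression as a stand-in for the Kriging/Moving-Least-Squares fits used in the paper; convergence of a derived statistic is monitored as samples are added.

```python
# Sketch: a progressive response surface built from incrementally added Halton
# samples, with convergence of a derived statistic (mean response under a
# uniform input distribution) monitored as points are added. Test function and
# surrogate choice are assumptions for illustration.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_model(x):                      # 2-D toy replacement for a simulation
    return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1]) + x[:, 1] ** 2

halton = qmc.Halton(d=2, seed=6)
eval_grid = qmc.Halton(d=2, seed=7).random(4000)   # dense point set for the statistic

X = np.empty((0, 2))
prev_stat = None
for batch in range(1, 7):                    # progressively add 10 samples at a time
    X = np.vstack([X, halton.random(10)])
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
    gp.fit(X, expensive_model(X))
    stat = gp.predict(eval_grid).mean()      # statistic computed on the surrogate
    change = None if prev_stat is None else abs(stat - prev_stat)
    print(f"{X.shape[0]:3d} samples: mean response ~ {stat:.4f}"
          + ("" if change is None else f"  (change {change:.1e})"))
    prev_stat = stat
```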

Development of a new adaptive ordinal approach to continuous-variable probabilistic optimization

Romero, Vicente J.

A very general and robust approach to solving continuous-variable optimization problems involving uncertainty in the objective function is through the use of ordinal optimization. At each step in the optimization problem, improvement is based only on a relative ranking of the uncertainty effects on local design alternatives, rather than on precise quantification of the effects. One simply asks "Is that alternative better or worse than this one?", not "HOW MUCH better or worse is that alternative to this one?" The answer to the latter question requires precise characterization of the uncertainty, with the corresponding sampling/integration expense for precise resolution. However, in this report we demonstrate correct decision-making in a continuous-variable probabilistic optimization problem despite extreme vagueness in the statistical characterization of the design options. We present a new adaptive ordinal method for probabilistic optimization in which the trade-off between computational expense and vagueness in the uncertainty characterization can be conveniently managed in various phases of the optimization problem to make cost-effective stepping decisions in the design space. Spatial correlation of uncertainty in the continuous-variable design space is exploited to dramatically increase method efficiency. Under many circumstances the method appears to have favorable robustness and cost-scaling properties relative to other probabilistic optimization methods, and uniquely has mechanisms for quantifying and controlling error likelihood in design-space stepping decisions. The method is asymptotically convergent to the true probabilistic optimum, so could be useful as a reference standard against which the efficiency and robustness of other methods can be compared, analogous to the role that Monte Carlo simulation plays in uncertainty propagation.

Advanced nuclear energy analysis technology

Young, Michael F.; Murata, Kenneth K.; Romero, Vicente J.; Gauntt, Randall O.; Rochau, Gary E.

A two-year effort focused on applying ASCI technology developed for the analysis of weapons systems to the state-of-the-art accident analysis of a nuclear reactor system was proposed. The Sandia SIERRA parallel computing platform for ASCI codes includes high-fidelity thermal, fluids, and structural codes whose coupling through SIERRA can be specifically tailored to the particular problem at hand to analyze complex multiphysics problems. Presently, however, the suite lacks several physics modules unique to the analysis of nuclear reactors. The NRC MELCOR code, not presently part of SIERRA, was developed to analyze severe accidents in present-technology reactor systems. We attempted to: (1) evaluate the SIERRA code suite for its current applicability to the analysis of next generation nuclear reactors, and the feasibility of implementing MELCOR models into the SIERRA suite, (2) examine the possibility of augmenting ASCI codes or alternatives by coupling to the MELCOR code, or portions thereof, to address physics particular to nuclear reactor issues, especially those facing next generation reactor designs, and (3) apply the coupled code set to a demonstration problem involving a nuclear reactor system. We were successful in completing the first two in sufficient detail to determine that an extensive demonstration problem was not feasible at this time. In the future, completion of this research would demonstrate the feasibility of performing high fidelity and rapid analyses of safety and design issues needed to support the development of next generation power reactor systems.

Application of probabilistic ordinal optimization concepts to a continuous-variable probabilistic optimization problem

Romero, Vicente J.

A very general and robust approach to solving optimization problems involving probabilistic uncertainty is through the use of Probabilistic Ordinal Optimization. At each step in the optimization problem, improvement is based only on a relative ranking of the probabilistic merits of local design alternatives, rather than on crisp quantification of the alternatives. Thus, we simply ask the question: 'Is that alternative better or worse than this one?' to some level of statistical confidence we require, not: 'HOW MUCH better or worse is that alternative to this one?'. In this paper we illustrate an elementary application of probabilistic ordinal concepts in a 2-D optimization problem. Two uncertain variables contribute to uncertainty in the response function. We use a simple Coordinate Pattern Search non-gradient-based optimizer to step toward the statistical optimum in the design space. We also discuss more sophisticated implementations, and some of the advantages and disadvantages versus non-ordinal approaches for optimization under uncertainty.

Initial evaluation of Centroidal Voronoi Tessellation method for statistical sampling and function integration

Romero, Vicente J.; Gunzburger, Max D.

A recently developed Centroidal Voronoi Tessellation (CVT) unstructured sampling method is investigated here to assess its suitability for use in statistical sampling and function integration. CVT efficiently generates a highly uniform distribution of sample points over arbitrarily shaped M-Dimensional parameter spaces. It has recently been shown on several 2-D test problems to provide superior point distributions for generating locally conforming response surfaces. In this paper, its performance as a statistical sampling and function integration method is compared to that of Latin-Hypercube Sampling (LHS) and Simple Random Sampling (SRS) Monte Carlo methods, and Halton and Hammersley quasi-Monte-Carlo sequence methods. Specifically, sampling efficiencies are compared for function integration and for resolving various statistics of response in a 2-D test problem. It is found that on balance CVT performs best of all these sampling methods on our test problems.
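
An approximate CVT sample set can be generated with Lloyd's algorithm (here, k-means applied to a dense uniform candidate cloud) and compared against simple random sampling on a toy integration task; the integrand and sample sizes below are illustrative assumptions.

```python
# Sketch: approximate CVT sampling via Lloyd's algorithm (k-means on a dense
# uniform candidate cloud), compared with simple random sampling for a toy
# 2-D integration task. The test integrand is an assumption.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(8)

def f(x):                                     # toy integrand on the unit square
    return np.cos(2 * np.pi * x[:, 0]) ** 2 + x[:, 1]

n_samples = 64
candidates = rng.random((20_000, 2))          # dense cloud over [0, 1]^2
cvt_points = KMeans(n_clusters=n_samples, n_init=1,
                    random_state=9).fit(candidates).cluster_centers_
srs_points = rng.random((n_samples, 2))

exact = 0.5 + 0.5                             # integral of cos^2(2*pi*x) is 0.5, of y is 0.5
print(f"CVT estimate : {f(cvt_points).mean():.4f}  (exact {exact})")
print(f"SRS estimate : {f(srs_points).mean():.4f}")
```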

Initial application and evaluation of a promising new sampling method for response surface generation: Centroidal Voronoi tessellation

Collection of Technical Papers - AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference

Romero, Vicente J.; Burkardt, J.; Gunzburger, M.; Peterson, J.; Krishnamurthy, T.

A recently developed Centroidal Voronoi Tessellation (CVT) sampling method is investigated here to assess its suitability for use in response surface generation. CVT is an unstructured sampling method that can generate nearly uniform point spacing over arbitrarily shaped M-dimensional parameter spaces. For rectangular parameter spaces (hypercubes), CVT appears to extend to higher dimensions more effectively and inexpensively than "Distributed" and "Improved Distributed" Latin Hypercube Monte Carlo methods, and CVT does not appear to suffer from spurious correlation effects in higher dimensions and at high sampling densities as quasi-Monte-Carlo methods such as Halton and Sobol sequences typically do. CVT is described briefly in this paper and its impact on response surface accuracy in a 2-D test problem is compared to the accuracy yielded by Latin Hypercube Sampling (LHS) and a deterministic structured-uniform sampling method. To accommodate the different point patterns over the parameter space given by the different sampling methods, Moving Least Squares (MLS) for interpolation of arbitrarily located data points is used. It is found that CVT performs better than LHS in 11 of 12 test cases investigated here, and as often as not performs better than the structured sampling method with its deterministically uniform point placement over the 2-D parameter space.

Description of the Sandia Validation Metrics Project

Trucano, Timothy G.; Easterling, Robert G.; Dowding, Kevin J.; Paez, Thomas L.; Urbina, Angel U.; Romero, Vicente J.; Rutherford, Brian M.; Hills, Richard G.

This report describes the underlying principles and goals of the Sandia ASCI Verification and Validation Program Validation Metrics Project. It also gives a technical description of two case studies, one in structural dynamics and the other in thermomechanics, that serve to focus the technical work of the project in Fiscal Year 2001.

Effect of initial seed and number of samples on simple-random and Latin-Hypercube Monte Carlo probabilities (confidence interval considerations)

Romero, Vicente J.

In order to devise an algorithm for autonomously terminating Monte Carlo sampling when sufficiently small and reliable confidence intervals (CI) are achieved on calculated probabilities, the behavior of CI estimators must be characterized. This knowledge is also required in comparing the accuracy of other probability estimation techniques to Monte Carlo results. Based on 100 trials in a hypothesis test, estimated 95% CI from classical approximate CI theory are empirically examined to determine if they behave as true 95% CI over a spectrum of probabilities (population proportions) ranging from 0.001 to 0.99 in a test problem. Tests are conducted for population sizes of 500 and 10,000 samples where applicable. Significant differences between true and estimated 95% CI are found to occur at probabilities between 0.1 and 0.9, such that estimated 95% CI can be rejected as not being true 95% CI at less than a 40% chance of incorrect rejection. With regard to Latin Hypercube sampling (LHS), though no general theory has been verified for accurately estimating LHS CI, recent numerical experiments on the test problem have found LHS to be conservatively over an order of magnitude more efficient than simple random sampling (SRS) for similar sized CI on probabilities ranging between 0.25 and 0.75. The efficiency advantage of LHS vanishes, however, as the probability extremes of 0 and 1 are approached.
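
The classical approximate interval in question is the normal-approximation (Wald) confidence interval for a binomial proportion; a small empirical coverage check in the same spirit can be sketched as follows, with illustrative sample sizes, probabilities, and trial counts.

```python
# Sketch: empirical coverage check of the classical normal-approximation
# (Wald) confidence interval for a Monte Carlo probability estimate.
# Sample sizes, probabilities, and trial counts are illustrative.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(10)
z = norm.ppf(0.975)                       # nominal 95% two-sided interval

def coverage(p_true, n_samples, n_trials=2000):
    hits = rng.binomial(n_samples, p_true, size=n_trials)
    p_hat = hits / n_samples
    half = z * np.sqrt(p_hat * (1.0 - p_hat) / n_samples)
    covered = (p_hat - half <= p_true) & (p_true <= p_hat + half)
    return covered.mean()

for p in (0.001, 0.01, 0.1, 0.5, 0.9, 0.99):
    print(f"p = {p:5.3f}: empirical coverage of nominal 95% CI "
          f"with 500 samples = {coverage(p, 500):.3f}")
```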

Application of finite element, global polynomial, and kriging response surfaces in Progressive Lattice Sampling designs

Romero, Vicente J.; Swiler, Laura P.; Giunta, Anthony A.

This paper examines the modeling accuracy of finite element interpolation, kriging, and polynomial regression used in conjunction with the Progressive Lattice Sampling (PLS) incremental design-of-experiments approach. PLS is a paradigm for sampling a deterministic hypercubic parameter space by placing and incrementally adding samples in a manner intended to maximally reduce lack of knowledge in the parameter space. When combined with suitable interpolation methods, PLS is a formulation for progressive construction of response surface approximations (RSA) in which the RSA are efficiently upgradable, and upon upgrading, offer convergence information essential in estimating error introduced by the use of RSA in the problem. The three interpolation methods tried here are examined for performance in replicating an analytic test function as measured by several different indicators. The process described here provides a framework for future studies using other interpolation schemes, test functions, and measures of approximation quality.
