This report describes the credibility activities undertaken in support of Gemma code development in FY20, which include Verification & Validation (V&V), Uncertainty Quantification (UQ), and Credibility process application. The main goal of these activities is to establish capabilities and process frameworks that can be more broadly applied to new and more advanced problems as the Gemma code development effort matures. This will provide Gemma developers and analysts with the tools needed to generate credibility evidence in support of Gemma predictions for future use cases. The FY20 Gemma V&V/UQ/Credibility activities described in this report include experimental uncertainty analysis, the development and use of methods for optimal design of computer experiments, and the development of a framework for validation. These initial activities supported the development of broader credibility planning for Gemma that continued into FY21.
In microcircuit fabrication, the diameter and length of a bond wire have been shown to affect both the current-to-fusing-time ratio of a bond wire and the gap length of the fused wire. This study investigated the impact of current level on the time-to-open and gap length of 1 mil by 60 mil gold bond wires. During the experiments, constant current was applied to a control set of bond wires for 250 ms, for 410 ms, and until the wire fused; to non-destructively pull-tested wires for 250 ms; and to notched wires. The key findings were that the gap length increases with current, that 73% of the bond wires fused at 1.8 A, and that 100% of the wires fused at 1.9 A within 60 ms. Due to the limited scope of the experiments and the limited data analyzed, further investigation is encouraged to confirm these observations.
In this paper, fusing of a metallic conductor is studied by judiciously using the solution of the one-dimensional heat equation, resulting in an approximate method for determining the threshold fusing current. The action is defined as the integral of the square of the wire current over time. The burst action (the action required to completely vaporize the material) for an exploding wire is then used to estimate the typical wire gapping action (involving wire fusing), from which the gapping time can be estimated for a gapping current more than twice the fusing current. The test data are used to determine the gapped length as a function of gapping current and to show, for a limited range, that the gapped length is inversely proportional to gapping time. The gapped length can be used as a signature of the fault-current level in microelectronic circuits.
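Stated in equation form, the definitions above amount to the following; the symbols S, S_b, and t_gap are notation introduced here for illustration and are not taken from the paper.

```latex
% Action integral for a wire carrying current i(t); S_b denotes the burst
% action (the action needed to completely vaporize the wire material).
S(t) = \int_{0}^{t} i^{2}(\tau)\,\mathrm{d}\tau ,
\qquad
\text{wire gapping occurs when } S(t_{\mathrm{gap}}) \approx S_{b}.
```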
This paper presents some statistical concepts and techniques for refining the expression of uncertainty arising from: a) random variability (aleatory uncertainty) of a random quantity; and b) contributed epistemic uncertainty due to limited sampling of the random quantity. The treatment is tailored to handling experimental uncertainty in a context of model validation and calibration. Two particular problems are considered. One involves deconvolving random measurement error from measured random response. The other involves exploiting a relationship between two random variates of a system and an independently characterized probability density of one of the variates.
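As an illustration of the first problem (deconvolving random measurement error from a measured random response), a minimal moment-based sketch follows; the data values, the additive zero-mean error model, and the variable names are assumptions made here for illustration, not the paper's treatment.

```python
import numpy as np

# Hypothetical measured responses and an independently characterized
# measurement-error standard deviation (both values assumed for illustration).
measured = np.array([10.2, 9.7, 10.5, 10.1, 9.9, 10.4, 10.0, 9.8])
sigma_meas = 0.25  # std. dev. of additive, zero-mean measurement error

# Observed variance mixes true (aleatory) variability with measurement error:
#   var_obs = var_true + var_meas   (assuming independence)
var_obs = measured.var(ddof=1)
var_true = max(var_obs - sigma_meas**2, 0.0)  # moment-based deconvolution

print(f"observed std = {np.sqrt(var_obs):.3f}")
print(f"estimated aleatory std = {np.sqrt(var_true):.3f}")
```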
A case study is reported to document the details of a validation process used to assess the accuracy of a mathematical model in representing experiments involving thermal decomposition of polyurethane foam. The focus of the report is to work through a validation process, which addresses the following activities. The intended application of the mathematical model is discussed to better understand the pertinent parameter space. The parameter space of the validation experiments is mapped to the application parameter space. The mathematical models, the computer code used to solve them, and the code verification are presented. Experimental data from two activities are used to validate the mathematical models. The first experiment assesses the chemistry model alone, and the second assesses the model of coupled chemistry, conduction, and enclosure radiation. The model results for both experimental activities are summarized, and the uncertainty of the model in representing each experimental activity is estimated. The comparison between the experimental data and the model results is quantified with various metrics. After addressing these activities, an assessment of the process for the case study is given. Weaknesses in the process are discussed and lessons learned are summarized.
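The report applies its own set of comparison metrics; as a generic illustration of quantifying agreement between experiment and model, here is a minimal sketch using root-mean-square and worst-case discrepancies. The paired values and the particular metrics shown are assumptions for illustration only.

```python
import numpy as np

# Hypothetical paired observations: experimental measurements and model
# predictions at the same conditions (values are illustrative only).
experiment = np.array([0.82, 0.91, 1.05, 1.18, 1.30])
model      = np.array([0.85, 0.97, 1.10, 1.21, 1.28])

residual = model - experiment
rmse = np.sqrt(np.mean(residual**2))           # root-mean-square discrepancy
rel_rmse = rmse / np.mean(np.abs(experiment))  # normalized by response scale
max_err = np.max(np.abs(residual))             # worst-case discrepancy

print(f"RMSE = {rmse:.3f}, relative RMSE = {rel_rmse:.1%}, max |error| = {max_err:.3f}")
```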
The general problem considered is an optimization problem involving product design where some initial data are available and computer simulation is to be used to obtain more information. Resources and system complexity together restrict the number of simulations that can be performed in search of optimal settings for the product parameters. Consequently, the levels of these parameters used in the simulations (the experimental design) must be selected in an efficient way. We describe an algorithmic 'response-modeling' approach for performing this selection. The algorithm is illustrated using a rolamite design application. We provide, as examples, optimal one-, two-, and three-point experimental designs for the rolamite computational analyses.
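The paper's selection method is the response-modeling approach it describes; as a simpler stand-in for the general task of choosing a small number of simulation settings efficiently, the following sketch picks space-filling points by a greedy maximin rule. The candidate grid, the function name greedy_maximin, and the two-parameter design space are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def greedy_maximin(candidates, n_points, seed=0):
    """Pick n_points from a candidate set so that the minimum pairwise
    distance is (greedily) maximized -- a simple space-filling stand-in
    for selecting a small number of simulation runs."""
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(candidates)))]    # arbitrary first point
    for _ in range(n_points - 1):
        d = np.min(
            np.linalg.norm(candidates[:, None, :] - candidates[chosen][None, :, :], axis=-1),
            axis=1,
        )
        d[chosen] = -np.inf                          # never re-pick a point
        chosen.append(int(np.argmax(d)))             # farthest from current design
    return candidates[chosen]

# Hypothetical two-parameter design space (e.g., two rolamite-like settings).
grid = np.array([[x, y] for x in np.linspace(0, 1, 11) for y in np.linspace(0, 1, 11)])
print(greedy_maximin(grid, n_points=3))
```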
Computational simulation methods have advanced to a point where simulation can contribute substantially in many areas of systems analysis. One research challenge that has accompanied this transition involves the characterization of uncertainty in both computer model inputs and the resulting system response. This article addresses a subset of the 'challenge problems' posed in [Challenge problems: uncertainty in system response given uncertain parameters, 2001], where uncertainty or information is specified over intervals of the input parameters and inferences based on the response are required. The emphasis of the article is to describe and illustrate a method for performing the tasks associated with this type of modeling 'economically', requiring relatively few evaluations of the system to obtain a precise estimate of the response. This 'response-modeling approach' is used to approximate a probability distribution for the system response. The distribution is then used (1) to make inferences concerning probabilities associated with response intervals and (2) to guide the selection of further, informative system evaluations.
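A minimal sketch of the general idea, not the article's specific method: fit a cheap response model from a handful of system evaluations, then propagate interval-specified inputs through it to estimate probabilities for response intervals. The stand-in system function, the polynomial surrogate, the uniform sampling of the input interval, and all numerical values are assumptions made for illustration.

```python
import numpy as np

# Hypothetical expensive system model (stand-in for the real simulation).
def system(x):
    return np.sin(3.0 * x) + 0.5 * x

# A few "expensive" evaluations at chosen input settings.
x_train = np.linspace(0.0, 1.0, 6)
y_train = system(x_train)

# Cheap response model (surrogate): a cubic polynomial fit.
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=3))

# Input known only as an interval: sample it uniformly (one common choice) and
# use the surrogate to estimate the probability the response falls in a
# target interval.
rng = np.random.default_rng(1)
x_samples = rng.uniform(0.2, 0.9, size=100_000)
y_samples = surrogate(x_samples)
p = np.mean((y_samples > 0.8) & (y_samples < 1.2))
print(f"estimated P(0.8 < response < 1.2) = {p:.3f}")
```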
Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas, including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics, and statistics literatures. In this report we provide a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high-temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application.

We discuss several elements of the "philosophy" behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical analyses and hypothesis tests as part of the validation step to provide feedback to analysts and modelers. Decisions on how to proceed in making model-based predictions are made based on these analyses together with the application requirements. Updating, modifying, and understanding the boundaries associated with the model are also assisted through this feedback. (4) We include a "model supplement term" when model problems are indicated. This term provides a (bias) correction to the model so that it will better match the experimental results and more accurately account for uncertainty. Presumably, as the models continue to develop and are used for future applications, the causes for these apparent biases will be identified and the need for this supplementary modeling will diminish. (5) We use a response-modeling approach for our predictions that allows for general types of prediction and for assessment of prediction uncertainty.

This approach is demonstrated through a case study supporting the assessment of a weapon's response when subjected to a hydrocarbon fuel fire. The foam decomposition model provides an important element of the response of a weapon system in this abnormal thermal environment. Rigid foam is used to encapsulate critical components in the weapon system, providing the needed mechanical support as well as thermal isolation. Because the foam begins to decompose at temperatures above 250 °C, modeling the decomposition is critical to assessing a weapon's response.
In the validation analysis it is indicated that the model tends to "exaggerate" the effect of temperature changes when compared to the experimental results. The data, however, are too few and too restricted in terms of experimental design to make confident statements regarding modeling problems. For illustration, we assume these indications are correct and compensate for this apparent bias by constructing a model supplement term for use in the model-based predictions. Several hypothetical prediction problems are created and addressed. Hypothetical problems are used because no guidance was provided concerning what was needed for this aspect of the analysis. The resulting predictions and the corresponding uncertainty assessment demonstrate the flexibility of this approach.
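As a sketch of how an additive model supplement (bias-correction) term might be built from validation residuals and carried into a prediction with an uncertainty band, the following is a minimal illustration; the linear form of the supplement, the data values, and the two-sigma band are assumptions made here, not the report's actual analysis.

```python
import numpy as np

# Hypothetical validation data: experimental responses and model predictions
# at the same temperature settings (values are illustrative only).
temps      = np.array([300.0, 350.0, 400.0, 450.0, 500.0])  # deg C
experiment = np.array([0.92, 1.10, 1.31, 1.49, 1.70])
model_pred = np.array([0.90, 1.12, 1.38, 1.60, 1.85])

# Model supplement (bias-correction) term: here a simple linear fit to the
# validation residuals as a function of temperature -- one plausible form,
# not the specific supplement constructed in the report.
residuals = experiment - model_pred
slope, intercept = np.polyfit(temps, residuals, deg=1)

def supplement(t):
    return slope * t + intercept

# Residual scatter about the fitted supplement gives a rough prediction
# uncertainty for the bias-corrected model.
sigma = np.std(experiment - (model_pred + supplement(temps)), ddof=2)

t_new = 425.0
raw_model = 1.47                      # hypothetical raw model output at t_new
corrected = raw_model + supplement(t_new)
print(f"corrected prediction at {t_new} C: {corrected:.3f} +/- {2*sigma:.3f} (2-sigma)")
```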
This report describes the underlying principles and goals of the Sandia ASCI Verification and Validation Program Validation Metrics Project. It also gives a technical description of two case studies, one in structural dynamics and the other in thermomechanics, that serve to focus the technical work of the project in Fiscal Year 2001.
This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.
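As one common ingredient of discretization-error estimation for a quantity of interest, the following sketch applies Richardson extrapolation to results from three systematically refined meshes; the mesh sizes, solution values, and constant refinement ratio are illustrative assumptions, not results from this research effort.

```python
import numpy as np

# Hypothetical quantity-of-interest values from three refined meshes
# (coarse -> fine) with a constant refinement ratio r.
h = np.array([0.04, 0.02, 0.01])        # mesh sizes
f = np.array([1.0520, 1.0132, 1.0034])  # computed quantity of interest
r = h[0] / h[1]                         # refinement ratio (here 2)

# Observed order of accuracy and Richardson-extrapolated estimate of the
# mesh-converged value; the gap to the finest-mesh solution serves as a
# discretization-error estimate that can be carried into the UQ analysis.
p = np.log((f[0] - f[1]) / (f[1] - f[2])) / np.log(r)
f_exact = f[2] + (f[2] - f[1]) / (r**p - 1.0)
disc_err = abs(f_exact - f[2])

print(f"observed order p = {p:.2f}")
print(f"extrapolated value = {f_exact:.4f}, discretization-error estimate = {disc_err:.1e}")
```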
Investigation and evaluation of a complex system is often accomplished through the use of performance measures based on system response models. The response models are constructed using computer-generated responses, supported where possible by physical test results. The general problem considered is one where resources and system complexity together restrict the number of simulations that can be performed. The levels of the input variables used to define environmental scenarios and initial and boundary conditions, and to set system parameters, must be selected in an efficient way. This report describes an algorithmic approach for performing this selection.