Publications

Results 26–49 of 49
Ideas underlying quantification of margins and uncertainties (QMU): a white paper

Pilch, Martin P.; Trucano, Timothy G.

This report describes key ideas underlying the application of Quantification of Margins and Uncertainties (QMU) to nuclear weapons stockpile lifecycle decisions at Sandia National Laboratories. While QMU is a broad process and methodology for generating critical technical information to be used in stockpile management, this paper emphasizes one component, which is information produced by computational modeling and simulation. In particular, we discuss the key principles of developing QMU information in the form of Best Estimate Plus Uncertainty, the need to separate aleatory and epistemic uncertainty in QMU, and the risk-informed decision making that is best suited for decisive application of QMU. The paper is written at a high level, but provides a systematic bibliography of useful papers for the interested reader to deepen their understanding of these ideas.
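The margin-to-uncertainty comparison at the heart of QMU can be sketched numerically. The following is a minimal illustration only, using a hypothetical performance threshold and made-up best-estimate and uncertainty values; none of these numbers come from the report:

```python
# Hypothetical illustration of a QMU-style margin-to-uncertainty ratio.
# The threshold, best estimate, and uncertainty below are invented numbers.

def confidence_ratio(best_estimate, threshold, uncertainty):
    """Ratio M/U, where margin M = threshold - best_estimate and
    U is the total quantified uncertainty in the best estimate."""
    margin = threshold - best_estimate
    return margin / uncertainty

# Example: requirement threshold 100.0, best estimate 80.0, uncertainty 8.0
cr = confidence_ratio(80.0, 100.0, 8.0)
print(cr)  # 2.5
```

When the ratio comfortably exceeds 1, the margin dominates the quantified uncertainty; ratios near or below 1 flag cases that warrant closer scrutiny in a risk-informed decision.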

The method of belief scales as a means for dealing with uncertainty in tough regulatory decisions

Pilch, Martin P.

Modeling and simulation is playing an increasing role in supporting tough regulatory decisions, which are typically characterized by variabilities and uncertainties in the scenarios, input conditions, failure criteria, model parameters, and even model form. Variability exists when there is a statistically significant database that is fully relevant to the application. Uncertainty, on the other hand, is characterized by some degree of ignorance. A simple algebraic problem was used to illustrate how various risk methodologies address variability and uncertainty in a regulatory context. These traditional risk methodologies include probabilistic methods (from both frequentist and Bayesian perspectives) and second-order methods where variabilities and uncertainties are treated separately. Representing uncertainties with (subjective) probability distributions and using probabilistic methods to propagate subjective distributions can lead to results that are not logically consistent with available knowledge and that may not be conservative. The Method of Belief Scales (MBS) is developed as a means to logically aggregate uncertain input information and to propagate that information through the model to a set of results that are scrutable, easily interpretable by the nonexpert, and logically consistent with the available input information. The MBS, particularly in conjunction with sensitivity analyses, has the potential to be more computationally efficient than other risk methodologies. The regulatory language must be tailored to the specific risk methodology if ambiguity and conflict are to be avoided.
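The second-order treatment mentioned above, in which variability and uncertainty are propagated separately, can be sketched with a nested Monte Carlo loop. This is a generic illustration of the separation, not the paper's algebraic problem or the MBS itself; the model y = a·x and the epistemic interval for a are invented:

```python
import random

# Minimal sketch of second-order (nested) Monte Carlo propagation.
# Outer loop: epistemic uncertainty, a parameter known only to lie in an
# interval. Inner loop: aleatory variability, sampled from a distribution.
# The model y = a * x and all numbers here are purely illustrative.

random.seed(0)

def inner_aleatory(a, n=10_000):
    """Propagate aleatory variability in x for one fixed epistemic value a."""
    samples = [a * random.gauss(1.0, 0.1) for _ in range(n)]
    return sum(samples) / n  # mean response for this epistemic realization

# Epistemic parameter assumed only to lie in [0.8, 1.2]: scan the interval
# instead of collapsing it into a single subjective distribution.
outer = [inner_aleatory(a) for a in (0.8, 0.9, 1.0, 1.1, 1.2)]
print(min(outer), max(outer))  # bounds on the mean response, not one estimate
```

Keeping the two loops separate yields an interval of possible answers for the epistemic parameter rather than a single blended probability, which is the distinction the second-order methods above preserve.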

Case study for model validation: assessing a model for thermal decomposition of polyurethane foam

Dowding, Kevin J.; Pilch, Martin P.; Rutherford, Brian M.; Hobbs, Michael L.

A case study is reported to document the details of a validation process for assessing the accuracy of a mathematical model that represents experiments involving thermal decomposition of polyurethane foam. The focus of the report is to work through a validation process, which addresses the following activities. The intended application of the mathematical model is discussed to better understand the pertinent parameter space, and the parameter space of the validation experiments is mapped to the application parameter space. The mathematical models, the computer code used to solve them, and the code's verification are presented. Experimental data from two activities are used to validate the mathematical models: the first experiment assesses the chemistry model alone, and the second assesses the coupled model of chemistry, conduction, and enclosure radiation. The model results for both experimental activities are summarized, and the uncertainty in the model's representation of each activity is estimated. The comparison between the experimental data and the model results is quantified with various metrics. After addressing these activities, an assessment of the process for the case study is given; weaknesses in the process are discussed and lessons learned are summarized.
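As a generic illustration of quantifying model-experiment agreement with a metric, the sketch below computes an RMS relative error over a few invented data points. The values and the choice of metric are hypothetical; they are not the foam decomposition data or the specific metrics from the report:

```python
import math

# Hypothetical validation metric: RMS relative error between model
# predictions and experimental measurements. All data values are invented.

def rms_relative_error(model, experiment):
    """Root-mean-square of the pointwise relative errors (model vs. data)."""
    terms = [((m - e) / e) ** 2 for m, e in zip(model, experiment)]
    return math.sqrt(sum(terms) / len(terms))

model_temps = [310.0, 355.0, 402.0]     # model predictions (made up)
measured_temps = [300.0, 350.0, 410.0]  # experimental data (made up)
print(rms_relative_error(model_temps, measured_temps))
```

A single scalar like this summarizes agreement across the comparison points; a full validation assessment would pair such metrics with the estimated experimental and model uncertainties.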

On the role of code comparisons in verification and validation

Trucano, Timothy G.; Pilch, Martin P.; Oberkampf, William L.

This report presents a perspective on the role of code comparison activities in verification and validation. We formally define the act of code comparison as the Code Comparison Principle (CCP) and investigate its application in both verification and validation. One of our primary conclusions is that the use of code comparisons for validation is improper and dangerous. We also conclude that while code comparisons may be argued to provide a beneficial component in code verification activities, there are higher quality code verification tasks that should take precedence. Finally, we provide a process for application of the CCP that we believe is minimal for achieving benefit in verification processes.

Level 1 Peer Review Process for the Sandia ASCI V and V Program: FY01 Final Report

Pilch, Martin P.; Froehlich, G.K.; Hodges, Ann L.; Peercy, David E.; Trucano, Timothy G.; Moya, Jaime L.

This report describes the results of the FY01 Level 1 Peer Reviews for the Verification and Validation (V&V) Program at Sandia National Laboratories. V&V peer review at Sandia is intended to assess the ASCI (Accelerated Strategic Computing Initiative) code team V&V planning process and execution. The Level 1 Peer Review process is conducted in accordance with the process defined in SAND2000-3099, and V&V Plans are developed in accordance with the guidelines defined in SAND2000-3101. The peer review process and the process for improving the Guidelines are necessarily synchronized and form parts of a larger quality improvement process supporting the ASCI V&V program at Sandia. During FY00, a prototype of the process was conducted for two code teams and their V&V Plans, and the process and guidelines were updated based on that prototype. In FY01, Level 1 Peer Reviews were conducted on an additional eleven code teams and their respective V&V Plans. This report summarizes the results from those peer reviews, including recommendations from the panels that conducted the reviews.

General Concepts for Experimental Validation of ASCI Code Applications

Trucano, Timothy G.; Pilch, Martin P.; Oberkampf, William L.

This report presents general concepts in a broadly applicable methodology for validation of Accelerated Strategic Computing Initiative (ASCI) codes for Defense Programs applications at Sandia National Laboratories. The concepts are defined and analyzed within the context of their relative roles in an experimental validation process. Examples of applying the proposed methodology to three existing experimental validation activities are provided in appendices, using an appraisal technique recommended in this report.

Peer Review Process for the Sandia ASCI V and V Program: Version 1.0

Pilch, Martin P.; Trucano, Timothy G.; Peercy, David E.; Hodges, Ann L.; Young, Eunice R.; Moya, Jaime L.

This report describes the initial definition of the Verification and Validation (V and V) Plan Peer Review Process at Sandia National Laboratories. V and V peer review at Sandia is intended to assess the ASCI code team V and V planning process and execution. Our peer review definition is designed to assess the V and V planning process in terms of the content specified by the Sandia Guidelines for V and V plans. Therefore, the peer review process and process for improving the Guidelines are necessarily synchronized, and form parts of a larger quality improvement process supporting the ASCI V and V program at Sandia.
