Publications


Description of the Sandia Validation Metrics Project

Trucano, Timothy G.; Easterling, Robert G.; Dowding, Kevin J.; Paez, Thomas L.; Urbina, Angel U.; Romero, Vicente J.; Rutherford, Brian M.; Hills, Richard G.

This report describes the underlying principles and goals of the Sandia ASCI Verification and Validation Program Validation Metrics Project. It also gives a technical description of two case studies, one in structural dynamics and the other in thermomechanics, that serve to focus the technical work of the project in Fiscal Year 2001.


Measuring the Predictive Capability of Computational Models: Principles and Methods, Issues and Illustrations

Easterling, Robert G.

It is critically important, for the sake of credible computational predictions, that model-validation experiments be designed, conducted, and analyzed in ways that provide for measuring predictive capability. I first develop a conceptual framework for designing and conducting a suite of physical experiments and calculations (ranging from phenomenological to integral levels), and then for analyzing the results, first to (statistically) measure predictive capability in the experimental situations and then to provide a basis for inferring the uncertainty of a computational-model prediction of system or component performance in an application environment or configuration that cannot or will not be tested. Several attendant issues are discussed in general, then illustrated via a simple linear model and a shock physics example. The primary messages I wish to convey are:

(1) The only way to measure predictive capability is via suites of experiments and corresponding computations in testable environments and configurations.

(2) Any measurement of predictive capability is a function of experimental data and hence is statistical in nature.

(3) A critical inferential link is required to connect observed prediction errors in experimental contexts to bounds on prediction errors in untested applications. Such a connection may require extrapolating both the computational model and the observed extra-model variability (the prediction errors: nature minus model).

(4) Model validation is not binary. Passing a validation test does not mean that the model can be used as a surrogate for nature.

(5) Model-validation experiments should be designed and conducted in ways that permit a realistic estimate of prediction errors, or extra-model variability, in application environments.

(6) Code uncertainty-propagation analyses do not (and cannot) characterize prediction error (nature versus computational prediction).

(7) There are trade-offs between model complexity and the ability to measure a computer model's predictive capability that need to be addressed in any particular application.

(8) Adequate quantification of predictive capability, even in greatly simplified situations, can require a substantial number of model-validation experiments.
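The abstract's core quantity, extra-model variability (prediction errors: nature minus model), lends itself to a small numerical illustration. The Python sketch below uses hypothetical measured and predicted values and a textbook normal-theory prediction interval; it is an illustrative assumption for this listing, not the statistical machinery developed in the paper.

    # Minimal sketch (not the paper's method): estimating extra-model variability
    # from a suite of model-validation experiments. All data and names below are
    # hypothetical placeholders for illustration.
    import numpy as np
    from scipy import stats

    # Paired results from n validation experiments: measured response ("nature")
    # and the corresponding computational prediction ("model").
    measured  = np.array([10.2, 11.8, 9.7, 12.5, 10.9, 11.1])   # hypothetical
    predicted = np.array([ 9.8, 11.2, 10.1, 12.9, 10.4, 11.6])  # hypothetical

    # Prediction errors: nature minus model (the extra-model variability).
    errors = measured - predicted
    n = errors.size
    mean_err = errors.mean()
    sd_err = errors.std(ddof=1)

    # A simple two-sided normal-theory prediction interval for the error expected
    # in a *new*, comparable experiment. Extrapolating this to an untested
    # application environment is the inferential step the abstract warns about;
    # it is not handled by this arithmetic.
    t = stats.t.ppf(0.975, df=n - 1)
    half_width = t * sd_err * np.sqrt(1 + 1 / n)
    lower, upper = mean_err - half_width, mean_err + half_width

    print(f"mean prediction error: {mean_err:.3f}")
    print(f"95% prediction interval for a new experiment's error: "
          f"[{lower:.3f}, {upper:.3f}]")

Even in this toy setting, the interval only characterizes error in experiments like the ones performed; carrying it to an untested application environment requires the extrapolation highlighted in message (3).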
