Publications

Advanced Technology Development and Mitigation (ATDM) SPARC Re-Entry Code Fiscal Year 2017 Progress and Accomplishments for ECP

Crozier, Paul C.; Howard, Micah A.; Rider, William J.; Freno, Brian A.; Bova, S.W.; Carnes, Brian C.

The SPARC (Sandia Parallel Aerodynamics and Reentry Code) will provide nuclear weapon qualification evidence for the random vibration and thermal environments created by re-entry of a warhead into the Earth's atmosphere. SPARC incorporates the innovative approaches of ATDM projects on several fronts, including: effective harnessing of heterogeneous compute nodes using Kokkos, exascale-ready parallel scalability through asynchronous multi-tasking, uncertainty quantification through Sacado integration, implementation of state-of-the-art reentry physics and multiscale models, use of advanced verification and validation methods, and enabling of improved workflows for users. SPARC is being developed primarily for the Department of Energy nuclear weapon program, with additional development and use of the code supported by the Department of Defense for conventional weapons programs.
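
The abstract's mention of Sacado refers to the Trilinos automatic differentiation package: physics kernels templated on an AD scalar type carry derivatives along with values. Below is a minimal Python sketch of that forward-mode (dual-number) idea with a hypothetical re-entry quantity of interest; it is illustrative only and is not SPARC code.

class Dual:
    # Forward-mode AD value: a number paired with its derivative,
    # analogous in spirit to Sacado's Fad scalar types in C++.
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def heating_proxy(velocity, density):
    # Hypothetical quantity of interest: q ~ rho * v**3, a crude
    # stagnation-heating scaling; a stand-in, not a SPARC model.
    return density * velocity * velocity * velocity

q = heating_proxy(Dual(7500.0, 1.0), Dual(1.0e-4))  # seed d/d(velocity) = 1
print(q.val, q.der)  # quantity of interest and its velocity sensitivity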

Uncertainty quantification's role in modeling and simulation planning, and credibility assessment through the predictive capability maturity model

Handbook of Uncertainty Quantification

Rider, William J.; Witkowski, Walter R.; Mousseau, Vincent A.

The importance of credible, trustworthy numerical simulations is obvious, especially when the results are used for making high-consequence decisions. Determining the credibility of such numerical predictions is much more difficult and requires a systematic approach to assessing predictive capability, associated uncertainties and overall confidence in the computational simulation process for the intended use of the model. This process begins with an evaluation of the computational modeling of the identified, important physics of the simulation for its intended use. This is commonly done through a Phenomena Identification Ranking Table (PIRT). Then an assessment of the evidence basis supporting the ability to computationally simulate these physics can be performed using various frameworks such as the Predictive Capability Maturity Model (PCMM). Several critical activities follow in the areas of code and solution verification, validation and uncertainty quantification, which will be described in detail in the following sections. The subject matter is introduced for general applications but specifics are given for the failure prediction project. The first task that must be completed in the verification and validation procedure is to perform a credibility assessment to fully understand the requirements and limitations of the current computational simulation capability for the specific intended use of the application. The PIRT and PCMM are tools used at Sandia National Laboratories (SNL) to provide a consistent means of performing such an assessment. Ideally, all stakeholders should be represented and should contribute to an accurate credibility assessment. PIRTs and PCMMs are both briefly described below, and the resulting assessments for an example project are given.
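
The PIRT mentioned above is, in essence, a ranking table: each phenomenon's importance for the intended application is paired with the adequacy of current knowledge or modeling capability. A minimal Python sketch with hypothetical entries, not the failure prediction project's actual PIRT:

# Each row: (phenomenon, importance, current model/knowledge adequacy).
# Entries are hypothetical placeholders.
pirt = [
    ("material plasticity", "high",   "medium"),
    ("crack initiation",    "high",   "low"),
    ("frictional contact",  "medium", "medium"),
    ("thermal softening",   "low",    "high"),
]
# The assessment highlights gaps: high importance paired with low adequacy.
gaps = [row for row in pirt if row[1] == "high" and row[2] == "low"]
print(gaps)  # the phenomena deserving the most attention and resources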

High Fidelity Coupling Methods for Blast Response on Thin Shell Structures

Thomas, Jesse D.; Ruggirello, Kevin P.; Love, Edward L.; Rider, William J.; Heinstein, Martin W.

Computational simulation of structures subjected to blast loadings requires integration of computational shock physics for blast and structural response with the potential for pervasive failure. Current methodologies for this problem space are problematic in terms of efficiency and solution quality. This report details the development of several coupling algorithms for thin shells, with an emphasis on rigorous verification where possible and comparisons to existing methodologies in use at Sandia.

Verification, Validation and Uncertainty Quantification for CGS

Sandia journal manuscript; not yet accepted for publication

Rider, William J.; James, R.K.; Weirs, Vincent G.

The overall conduct of verification, validation and uncertainty quantification (VVUQ) is discussed through the construction of a workflow relevant to computational modeling, including the turbulence problem in the coarse-grained simulation (CGS) approach. The workflow contained herein is defined at a high level and constitutes an overview of the activity. Nonetheless, the workflow represents an essential activity in predictive simulation and modeling. VVUQ is complex and necessarily hierarchical in nature. The particular characteristics of VVUQ elements depend upon where the VVUQ activity takes place in the overall hierarchy of physics and models. In this chapter, we focus on the differences between and interplay among validation, calibration and UQ, as well as the difference between UQ and sensitivity analysis. The discussion in this chapter is at a relatively high level and attempts to explain the key issues associated with the overall conduct of VVUQ. The intention is that computational physicists can refer to this chapter for guidance regarding how VVUQ analyses fit into their efforts toward conducting predictive calculations.
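
The UQ half of the distinction drawn here is simply forward propagation: assumed input distributions pushed through the model to yield an output distribution, whereas sensitivity analysis attributes the output variance to individual inputs. A minimal Monte Carlo propagation sketch, with a stand-in model and an assumed input distribution (none of it from the chapter):

import numpy as np

rng = np.random.default_rng(0)

def model(c):
    # Stand-in for an expensive CGS calculation: a smooth response
    # to a single uncertain coefficient c.
    return 1.0 / (1.0 + c**2)

c_samples = rng.normal(loc=0.1, scale=0.02, size=10_000)  # assumed input distribution
y = model(c_samples)
print(y.mean(), y.std())  # forward UQ yields an output distribution, not a point value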

ALEGRA Update: Modernization and Resilience Progress

Robinson, Allen C.; Petney, Sharon P.; Drake, Richard R.; Weirs, Vincent G.; Adams, Brian M.; Vigil, Dena V.; Carpenter, John H.; Garasi, Christopher J.; Wong, Michael K.; Robbins, Joshua R.; Siefert, Christopher S.; Strack, Otto E.; Wills, Ann E.; Trucano, Timothy G.; Bochev, Pavel B.; Summers, Randall M.; Stewart, James R.; Ober, Curtis C.; Rider, William J.; Haill, Thomas A.; Lemke, Raymond W.; Cochrane, Kyle C.; Desjarlais, Michael P.; Love, Edward L.; Voth, Thomas E.; Mosso, Stewart J.; Niederhaus, John H.

Abstract not provided.

Development of a fourth generation predictive capability maturity model

Hills, Richard G.; Witkowski, Walter R.; Rider, William J.; Trucano, Timothy G.; Urbina, Angel U.

The Predictive Capability Maturity Model (PCMM) is an expert elicitation tool designed to characterize and communicate the completeness of the approaches used for computational model definition, verification, validation, and uncertainty quantification associated with an intended application. The primary application of this tool at Sandia National Laboratories (SNL) has been for physics-based computational simulations in support of nuclear weapons applications. The two main goals of a PCMM evaluation are 1) the communication of computational simulation capability, accurately and transparently, and 2) the development of input for effective planning. As a result of the increasing importance of computational simulation to SNL's mission, the PCMM has evolved through multiple generations with the goal of providing more clarity, rigor, and completeness in its application. This report describes the approach used to develop the fourth generation of the PCMM.
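
A PCMM evaluation scores a fixed set of elements at discrete maturity levels, commonly 0 through 3. The sketch below shows how such scores become planning input; the element names follow earlier published PCMM descriptions, and the scores and target level are hypothetical.

# PCMM elements scored at maturity levels 0-3 (scores are hypothetical).
pcmm_scores = {
    "representation and geometric fidelity": 2,
    "physics and material model fidelity":   1,
    "code verification":                     2,
    "solution verification":                 1,
    "model validation":                      1,
    "uncertainty quantification":            0,
}
target = 2  # assumed maturity required for the intended application
shortfalls = {k: target - v for k, v in pcmm_scores.items() if v < target}
print(shortfalls)  # planning input: where the evidence basis must be strengthened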

Fundamental issues in the representation and propagation of uncertain equation of state information in shock hydrodynamics

Computers and Fluids

Robinson, Allen C.; Berry, Robert D.; Carpenter, John H.; Debusschere, Bert D.; Drake, Richard R.; Mattsson, A.E.; Rider, William J.

Uncertainty quantification (UQ) deals with providing reasonable estimates of the uncertainties associated with an engineering model and propagating them to final engineering quantities of interest. We present a conceptual UQ framework for the case of shock hydrodynamics with Euler's equations where the uncertainties are assumed to lie principally in the equation of state (EOS). In this paper we consider experimental data as providing both data and an estimate of data uncertainty. We propose a specific Bayesian inference approach for characterizing EOS uncertainty in thermodynamic phase space. We show how this approach provides a natural and efficient methodology for transferring data uncertainty to engineering outputs through an EOS representation that understands and deals consistently with parameter correlations as sensed in the data. Historically, complex multiphase EOSs have been built utilizing tables as the delivery mechanism in order to amortize the cost of creating the tables over many subsequent continuum-scale runs. Once UQ enters the picture, however, the proper operational paradigm for multiphase tables becomes much less clear. Using a simple single-phase Mie-Grüneisen model, we experiment with several approaches and demonstrate how uncertainty can be represented. We also show how the quality of the tabular representation is of key importance. As a first step, we demonstrate a particular tabular approach for the Mie-Grüneisen model which, when extended to multiphase tables, should have value for designing a UQ-enabled shock hydrodynamic modeling approach that is not only theoretically sound but also robust, useful, and acceptable to the modeling community. We also propose an approach to separate data uncertainty from modeling error in the EOS. © 2012 Elsevier Ltd.
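
To make the experiment concrete, here is a minimal sketch of a Hugoniot-referenced Mie-Grüneisen pressure evaluation with Monte Carlo propagation of parameter uncertainty. The aluminum-like parameter values and spreads are illustrative assumptions rather than the paper's data, the constant-Γρ form is one common modeling choice, and sampling the parameters independently ignores exactly the correlations the paper argues must be recovered from the data.

import numpy as np

def mie_gruneisen_p(rho, e, rho0, c0, s, gamma0):
    # Hugoniot-referenced Mie-Gruneisen pressure, assuming Gamma*rho = Gamma0*rho0.
    eta = 1.0 - rho0 / rho                          # compression
    p_h = rho0 * c0**2 * eta / (1.0 - s * eta)**2   # Hugoniot reference pressure
    e_h = p_h * eta / (2.0 * rho0)                  # Hugoniot reference specific energy
    return p_h + gamma0 * rho0 * (e - e_h)

rng = np.random.default_rng(1)
n = 5000
c0 = rng.normal(5330.0, 50.0, n)   # bulk sound speed [m/s], aluminum-like
s = rng.normal(1.34, 0.02, n)      # Hugoniot slope in u_s = c0 + s*u_p
g0 = rng.normal(2.0, 0.1, n)       # Gruneisen parameter
p = mie_gruneisen_p(rho=3000.0, e=2.0e5, rho0=2700.0, c0=c0, s=s, gamma0=g0)
print(p.mean(), p.std())           # propagated pressure uncertainty at one (rho, e) state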

Sensitivity analysis techniques applied to a system of hyperbolic conservation laws

Reliability Engineering and System Safety

Weirs, V.G.; Kamm, James R.; Swiler, Laura P.; Tarantola, Stefano; Ratto, Marco; Adams, Brian M.; Rider, William J.; Eldred, Michael S.

Sensitivity analysis comprises techniques to quantify the effects of the input variables on a set of outputs. In particular, sensitivity indices can be used to infer which input parameters most significantly affect the results of a computational model. With continually increasing computing power, sensitivity analysis has become an important technique by which to understand the behavior of large-scale computer simulations. Many sensitivity analysis methods rely on sampling from distributions of the inputs. Such sampling-based methods can be computationally expensive, requiring many evaluations of the simulation; in this case, the Sobol method provides an easy and accurate way to compute variance-based measures, provided a sufficient number of model evaluations are available. As an alternative, meta-modeling approaches have been devised to approximate the response surface and estimate various measures of sensitivity. In this work, we consider a variety of sensitivity analysis methods, including different sampling strategies, different meta-models, and different ways of evaluating variance-based sensitivity indices. The problem we consider is the 1-D Riemann problem. By a careful choice of inputs, discontinuous solutions are obtained, leading to discontinuous response surfaces; such surfaces can be particularly problematic for meta-modeling approaches. The goal of this study is to compare the estimated sensitivity indices with exact values and to evaluate the convergence of these estimates with increasing sample sizes and under an increasing number of meta-model evaluations. © 2011 Elsevier Ltd. All rights reserved.
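
The variance-based (Sobol) indices referred to above can be estimated with a pick-freeze sampling scheme. The sketch below uses the Saltelli-style first-order estimator on a deliberately discontinuous toy response, echoing the discontinuous response surfaces the paper studies; the function, dimension, and sample size are assumptions, not the paper's Riemann problem setup.

import numpy as np

def response(x):
    # Toy response with a jump, loosely echoing shocked Riemann outputs.
    return np.where(x[:, 0] + 0.5 * x[:, 1] > 0.5, 1.0, 0.0) + 0.1 * x[:, 2]

rng = np.random.default_rng(2)
n, d = 100_000, 3
A = rng.uniform(size=(n, d))
B = rng.uniform(size=(n, d))
fA, fB = response(A), response(B)
variance = np.var(np.concatenate([fA, fB]))
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                    # freeze all inputs except the i-th
    Si = np.mean(fB * (response(ABi) - fA)) / variance
    print(f"first-order S_{i} ~ {Si:.3f}")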

A new pressure relaxation closure model for two-material Lagrangian hydrodynamics

Kamm, James R.; Rider, William J.

We present a new model for closing a system of Lagrangian hydrodynamics equations for a two-material cell with a single velocity model. We describe a new approach that is motivated by earlier work of Delov and Sadchikov and of Goncharov and Yanilkin. Using a linearized Riemann problem to initialize volume fraction changes, we require that each material satisfy its own pdV equation, which breaks the overall energy balance in the mixed cell. To enforce this balance, we redistribute the energy discrepancy by assuming that the corresponding pressure change in each material is equal. This multiple-material model is packaged as part of a two-step time integration scheme. We compare results of our approach with other models and with corresponding pure-material calculations, on two-material test problems with ideal-gas or stiffened-gas equations of state.
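
For contrast with the relaxation model above, the simplest mixed-cell closure is instantaneous pressure equilibrium. For two ideal gases sharing a cell with their specific internal energies held fixed, the equilibrium volume split has a closed form; the sketch below is that baseline, not the authors' model, and holding the energies fixed is precisely the pdV work exchange their model accounts for.

def equal_pressure_split(V, m1, e1, g1, m2, e2, g2):
    # Split total volume V between two ideal gases so pressures match:
    # p_i = (g_i - 1) * m_i * e_i / V_i, with specific energies e_i fixed.
    a = (g1 - 1.0) * m1 * e1
    b = (g2 - 1.0) * m2 * e2
    V1 = V * a / (a + b)   # then p1 = a/V1 = (a + b)/V = b/(V - V1) = p2
    return V1, V - V1

V1, V2 = equal_pressure_split(V=1.0, m1=1.0, e1=2.0, g1=1.4, m2=0.1, e2=5.0, g2=1.67)
print(V1, V2)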

Algorithmic properties of the midpoint predictor-corrector time integrator

Love, Edward L.; Scovazzi, Guglielmo S.; Rider, William J.

Algorithmic properties of the midpoint predictor-corrector time integration algorithm are examined. In the case of a finite number of iterations, the errors in angular momentum conservation and incremental objectivity are controlled by the number of iterations performed. Exact angular momentum conservation and exact incremental objectivity are achieved in the limit of an infinite number of iterations. A complete stability and dispersion analysis of the linearized algorithm is detailed. The main observation is that stability depends critically on the number of iterations performed.
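
The dependence on iteration count is easy to observe on a toy problem. The sketch below applies an iterated implicit-midpoint step to planar rigid rotation, where the solution norm plays the role of the conserved quantity: the drift in the norm shrinks as the number of corrector passes grows. The ODE is an assumed toy, not the structural dynamics setting of the paper.

import numpy as np

def midpoint_pc_step(v, dt, omega, iters):
    # Predictor plus fixed-point corrector passes for the implicit midpoint
    # rule applied to dv/dt = f(v), with f a planar rotation at rate omega.
    f = lambda u: omega * np.array([-u[1], u[0]])
    v_half = v + 0.5 * dt * f(v)           # predictor
    for _ in range(iters - 1):             # corrector passes
        v_half = v + 0.5 * dt * f(v_half)
    return 2.0 * v_half - v                # midpoint update

for iters in (1, 2, 5, 20):
    v = np.array([1.0, 0.0])
    for _ in range(1000):
        v = midpoint_pc_step(v, dt=0.01, omega=1.0, iters=iters)
    print(iters, abs(np.linalg.norm(v) - 1.0))  # conservation error falls with iters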

Verification for ALEGRA using magnetized shock hydrodynamics problems

Gardiner, Thomas A.; Rider, William J.; Robinson, Allen C.

Two classical verification problems from shock hydrodynamics are adapted for verification in the context of ideal magnetohydrodynamics (MHD) by introducing strong transverse magnetic fields, and simulated using the finite element Lagrange-remap MHD code ALEGRA for purposes of rigorous code verification. The concern in these verification tests is that inconsistencies related to energy advection are inherent in Lagrange-remap formulations for MHD, such that conservation of the kinetic and magnetic components of the energy may not be maintained. Hence, total energy conservation may also not be maintained. MHD shock propagation may therefore not be treated consistently in Lagrange-remap schemes, as errors in energy conservation are known to result in unphysical shock wave speeds and post-shock states. That kinetic energy is not conserved in Lagrange-remap schemes is well known, and the correction of DeBar has been shown to eliminate the resulting errors. Here, the consequences of the failure to conserve magnetic energy are revealed using order verification in the two magnetized shock-hydrodynamics problems. Further, a magnetic analog to the DeBar correction is proposed and its accuracy evaluated using this verification testbed. Results indicate that only when the total energy is conserved, by implementing both the kinetic and magnetic components of the DeBar correction, can simulations in Lagrange-remap formulation capture MHD shock propagation accurately. Additional insight is provided by the verification results, regarding the implementation of the DeBar correction and the advection scheme.
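
The kinetic half of the DeBar correction can be written schematically: advect the kinetic energy conservatively alongside mass and momentum, recompute the kinetic energy implied by the remapped momentum, and deposit the difference into internal energy so the total is conserved; the magnetic analog evaluated here would treat the field energy the same way. The per-cell sketch below uses hypothetical variable names and is not ALEGRA code.

def debar_kinetic_fixup(m_new, mom_new, ke_advected, e_int_new):
    # After the remap: enforce total-energy conservation by moving the
    # kinetic-energy remap discrepancy into the internal energy.
    u_new = mom_new / m_new
    ke_from_u = 0.5 * m_new * u_new**2      # KE implied by remapped momentum
    discrepancy = ke_advected - ke_from_u   # advected KE minus implied KE
    return e_int_new + discrepancy / m_new  # corrected specific internal energy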

On sub-linear convergence for linearly degenerate waves in capturing schemes

Journal of Computational Physics

Banks, Jeffrey W.; Aslam, T.; Rider, William J.

A common attribute of capturing schemes used to find approximate solutions to the Euler equations is a sub-linear rate of convergence with respect to mesh resolution. Purely nonlinear jumps, such as shock waves, produce a first-order convergence rate, but linearly degenerate discontinuous waves, where present, produce sub-linear convergence rates which eventually dominate the global rate of convergence. The classical explanation for this phenomenon investigates the behavior of the exact solution to the governing equations augmented with the leading-order error terms of the numerical method, often referred to as the modified equation. For a first-order method, the modified equation is the hyperbolic evolution equation with second-order diffusive terms. In the frame of reference of the traveling wave, the solution of a discontinuous wave consists of a diffusive layer that grows at a rate of t^(1/2), yielding a convergence rate of 1/2. Self-similar heuristics for higher-order discretizations produce a growth rate for the layer thickness of t^(1/(p+1)), which yields an estimate for the convergence rate as p/(p+1), where p is the order of the discretization. In this paper we show that this estimated convergence rate can be derived with greater rigor for both dissipative and dispersive forms of the discrete error. In particular, the linear modified equations can be solved exactly in analytic form. These estimates and forms for the error are confirmed in a variety of demonstrations ranging from simple linear waves to multidimensional solutions of the Euler equations. © 2008 Elsevier Inc.
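
The p/(p+1) estimate is straightforward to reproduce numerically: first-order upwind (p = 1) advecting a step should converge at a rate near 1/2 in L1. A minimal sketch, with an assumed periodic advection setup that is not taken from the paper:

import numpy as np

def upwind_step_error(n):
    # L1 error for first-order upwind advecting a step once around [0, 1).
    x = (np.arange(n) + 0.5) / n
    u = (x < 0.5).astype(float)
    cfl, t = 0.8, 0.0
    while t < 1.0:
        dt = min(cfl / n, 1.0 - t)
        u = u - (dt * n) * (u - np.roll(u, 1))  # upwind update for speed a = 1
        t += dt
    exact = (x < 0.5).astype(float)             # the step returns to its start at t = 1
    return np.mean(np.abs(u - exact))

errs = [upwind_step_error(n) for n in (100, 200, 400, 800)]
print([np.log2(errs[i] / errs[i + 1]) for i in range(3)])  # observed rates near 1/2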

ALEGRA: An arbitrary Lagrangian-Eulerian multimaterial, multiphysics code

46th AIAA Aerospace Sciences Meeting and Exhibit

Robinson, Allen C.; Brunner, Thomas A.; Carroll, Susan; Drake, Richard R.; Garasi, Christopher J.; Gardiner, Thomas; Haill, Thomas; Hanshaw, Heath; Hensinger, David; Labreche, Duane; Lemke, Raymond; Love, Edward; Luchini, Christopher; Mosso, Stewart; Niederhaus, John; Ober, Curtis C.; Petney, Sharon; Rider, William J.; Scovazzi, Guglielmo; Strack, O.E.; Summers, Randall; Trucano, Timothy; Weirs, V.G.; Wong, Michael; Voth, Thomas

ALEGRA is an arbitrary Lagrangian-Eulerian (multiphysics) computer code developed at Sandia National Laboratories since 1990. The code contains a variety of physics options including magnetics, radiation, and multimaterial flow. The code has been developed for nearly two decades, but recent work has dramatically improved the code's accuracy and robustness. These improvements include techniques applied to the basic Lagrangian differencing, artificial viscosity, and the remap step of the method, including an important improvement in the basic conservation of energy in the scheme. We will discuss the various algorithmic improvements and their impact on the results for important applications. Included in these applications are magnetic implosions, ceramic fracture modeling, and electromagnetic launch. Copyright © 2008 by the American Institute of Aeronautics and Astronautics, Inc.
