Publications

87 Results
Medium-Scale Methanol Pool Fire Model Validation

Journal of Heat Transfer

Hubbard, Joshua A.; Kirsch, Jared K.; Hewson, John C.; Hansen, Michael A.; Domino, Stefan P.

In this work, medium-scale (30 cm diameter) methanol pool fires were simulated using the latest fire modeling suite implemented in Sierra/Fuego, a low-Mach-number multiphysics reacting flow code. The sensitivity of model outputs to various model parameters was studied with the objective of providing model validation. This work also assesses model performance relative to other recently published large eddy simulations (LES) of the same validation case. Two pool surface boundary conditions were simulated: the first prescribed the fuel mass flux, and the second used an algorithm to predict mass flux from a mass and energy balance at the fuel surface. Gray gas radiation model parameters (absorption coefficients and gas radiation sources) were varied to assess radiant heat losses to the surroundings and pool surface. The radiation model was calibrated by comparing the simulated radiant fraction of the plume to experimental data. The effects of mesh resolution were also quantified, starting with a grid resolution representative of engineering-type fire calculations and then uniformly refining that mesh in the plume region. Simulation data were compared to experimental data collected at the University of Waterloo and the National Institute of Standards and Technology (NIST). Validation data included plume temperature, radial and axial velocities, velocity-temperature and velocity-velocity turbulent correlations, radiant and convective heat fluxes to the pool surface, and plume radiant fraction. Additional analyses were performed in the pool boundary layer to assess simulated flame anchoring and its effect on convective heat fluxes. This work assesses the capability of the latest Fuego physics and chemistry model suite and provides additional insight into pool fire modeling for non-luminous, non-sooting flames.

Verification of Data-Driven Models of Physical Phenomena using Interpretable Approximation

Ray, Jaideep R.; Barone, Matthew F.; Domino, Stefan P.; Banerjee, Tania B.; Ranka, Sanjay R.

Machine-learned models, specifically neural networks, are increasingly used as “closures” or “constitutive models” in engineering simulators to represent fine-scale physical phenomena that are too computationally expensive to resolve explicitly. However, these neural net models of unresolved physical phenomena tend to fail unpredictably and are therefore not used in mission-critical simulations. In this report, we describe new methods to authenticate them, i.e., to determine the (physical) information content of their training datasets, qualify the scenarios where they may be used, and verify that the neural net, as trained, adheres to physics theory. We demonstrate these methods with a neural net closure of turbulent phenomena used in Reynolds Averaged Navier-Stokes equations. We show the types of turbulent physics extant in our training datasets, and, using a test flow of an impinging jet, identify the exact locations where the neural network would be extrapolating, i.e., where it would be used outside the feature-space where it was trained. Using Generalized Linear Mixed Models, we also generate explanations of the neural net (à la Local Interpretable Model-agnostic Explanations) at prototypes placed in the training data and compare them with approximate analytical models from turbulence theory. Finally, we verify our findings by reproducing them using two different methods.

Viral Fate and Transport for COVID-19 - NVBL

Negrete, Oscar N.; Domino, Stefan P.; Ho, Clifford K.

The NVBL Viral Fate and Transport Team includes researchers from eleven DOE national laboratories and utilizes unique experimental facilities combined with physics-based and data-driven modeling and simulation to study the transmission, transport, and fate of SARS-CoV-2. The team focused on understanding and ultimately predicting SARS-CoV-2 viability in varied environments with the goal of rapidly informing strategies that guide the nation’s resumption of normal activities. The primary goals of this project include prioritizing administrative and engineering controls that reduce the risk of SARS-CoV-2 transmission within an enclosed environment; identifying the chemical and physical properties that influence binding of SARS-CoV-2 to common surfaces; and understanding the contribution of environmental reservoirs and conditions on transmission and resurgence of SARS-CoV-2.

Predicting large-scale pool fire dynamics using an unsteady flamelet- and large-eddy simulation-based model suite

Physics of Fluids

Domino, Stefan P.; Hewson, John C.; Knaus, Robert C.; Hansen, Michael A.

A low-Mach, unstructured, large-eddy-simulation-based, unsteady flamelet approach with a generalized heat loss combustion methodology (including soot generation and consumption mechanisms) is deployed to support a large-scale, quiescent, 5-m JP-8 pool fire validation study. The quiescent pool fire validation study deploys solution sensitivity procedures, i.e., the effect of mesh and time step refinement on capturing key fire dynamics such as fingering and puffing, as mesh resolutions approach O(1) cm. A novel design-order, discrete-ordinate-method discretization methodology is established by use of an analytical thermal/participating media radiation solution on both low-order hexahedral and tetrahedral mesh topologies in addition to quadratic hexahedral elements. The coupling between heat losses and the flamelet thermochemical state is achieved by augmenting the unsteady flamelet equation set with a heat loss source term. Soot and radiation source terms are determined using flamelet approaches for the full range of heat losses experienced in fire applications, including radiative extinction. The proposed modeling and simulation paradigm is validated using pool surface radiative heat flux, maximum centerline temperature location, and puffing frequency data, all of which are predicted within 10% accuracy. Simulations demonstrate that under-resolved meshes predict an overly conservative radiative heat flux magnitude, while still showing improved agreement compared to a previously deployed hybrid Reynolds-averaged Navier-Stokes/eddy dissipation concept-based methodology.
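For context, puffing frequencies like those validated above are commonly compared against the well-known empirical correlation f ≈ 1.5/√D Hz, with pool diameter D in meters (due to Cetegen and co-workers); this sketch uses only that correlation, not the paper's model:

```python
import math

def puffing_frequency_hz(diameter_m):
    """Empirical pool-fire puffing frequency, f ~ 1.5 / sqrt(D) Hz (D in m)."""
    return 1.5 / math.sqrt(diameter_m)

# For the 5-m JP-8 pool of the study, the correlation gives roughly 0.67 Hz.
print(round(puffing_frequency_hz(5.0), 2))
```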

A multi-physics computational investigation of droplet pathogen transport emanating from synthetic coughs and breathing

Atomization and Sprays

Domino, Stefan P.; Pierce, Flint P.; Hubbard, Joshua A.

In response to the global SARS-CoV-2 transmission pandemic, Sandia National Laboratories Rapid Lab-Directed Research and Development COVID-19 initiative has deployed a multi-physics, droplet-laden, turbulent low-Mach simulation tool to model pathogen-containing water droplets that emanate from synthetic human coughing and breathing. The low-Mach turbulent large-eddy simulation-based Eulerian/point-particle Lagrangian methodology directly couples mass, momentum, energy, and species to capture droplet evaporation physics that supports the ability to distinguish between droplets that deposit and those that persist in the environment. The cough mechanism is modeled as a pulsed spray with a prescribed log-normal droplet size distribution. Simulations demonstrate direct droplet deposition lengths in excess of three meters, while the persistence of droplet nuclei entrained within a buoyant plume is noted. Including the effect of protective barriers demonstrates effective mitigation of large-droplet transport. For coughs into a protective barrier, jet impingement and large-scale recirculation can drive droplets vertically and back toward the subject while supporting persistence of droplet nuclei. Simulations in quiescent conditions demonstrate droplet preferential concentrations due to the coupling between vortex ring shedding and the subsequent advection of a series of three-dimensional rings that tilt and rise vertically due to a misalignment between the initial principal vortex trajectory and gravity. These resolved coughing simulations note vortex ring formation, roll-up and breakdown, while entraining droplet nuclei for large distances and time scales.
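Sampling the prescribed log-normal droplet size distribution for such a pulsed-spray cough model can be sketched as follows; the median diameter and log-standard-deviation below are illustrative placeholders, not the parameters used in the study:

```python
import numpy as np

def sample_droplet_diameters(n, median_um=80.0, sigma_ln=0.6, seed=0):
    """Sample n droplet diameters (microns) from a log-normal distribution.

    median_um and sigma_ln are hypothetical values for illustration; the
    study prescribes its own distribution for the synthetic cough.
    """
    rng = np.random.default_rng(seed)
    # For a log-normal variate, ln(d) ~ Normal(mu, sigma) with mu = ln(median).
    mu = np.log(median_um)
    return rng.lognormal(mean=mu, sigma=sigma_ln, size=n)

d = sample_droplet_diameters(100_000)
# The sample median should approach the prescribed median diameter.
print(round(float(np.median(d)), 1))
```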

A Case Study on Pathogen Transport, Deposition, Evaporation and Transmission: Linking High-Fidelity Computational Fluid Dynamics Simulations to Probability of Infection

International Journal of Computational Fluid Dynamics

Domino, Stefan P.

A high-fidelity, low-Mach computational fluid dynamics simulation tool that includes evaporating droplets and variable-density turbulent flow coupling is well-suited to ascertain transmission probability and supports risk mitigation methods development for airborne infectious diseases such as COVID-19. A multi-physics large-eddy simulation-based paradigm is used to explore droplet and aerosol pathogen transport from a synthetic cough emanating from a kneeling humanoid. For an outdoor configuration that mimics the recent open-space social distance strategy of San Francisco, maximum primary droplet deposition distances are shown to approach 8.1 m in a moderate wind configuration with the aerosol plume transported in excess of 15 m. In quiescent conditions, the aerosol plume extends to approximately 4 m before the emanating pulsed jet becomes neutrally buoyant. A dose–response model, which is based on previous SARS coronavirus (SARS-CoV) data, is exercised on the high-fidelity aerosol transport database to establish relative risk at eighteen virtual receptor probe locations.
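The dose–response step can be illustrated with the standard exponential model P = 1 − exp(−dose/k); the value k = 410, reported in the literature for SARS-CoV, is used here only as an illustrative assumption and is not necessarily the parameterization of the paper:

```python
import math

def infection_probability(inhaled_dose, k=410.0):
    """Exponential dose-response model: P = 1 - exp(-dose/k).

    k is the characteristic dose at which infection probability reaches
    ~63%; k = 410 is a SARS-CoV literature value, assumed for illustration.
    """
    return 1.0 - math.exp(-inhaled_dose / k)

# Probability rises monotonically with inhaled dose and saturates at 1.
for dose in (0.0, 50.0, 410.0, 2000.0):
    print(dose, round(infection_probability(dose), 3))
```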

An assessment of atypical mesh topologies for low-Mach large-eddy simulation

Computers and Fluids

Domino, Stefan P.; Sakievich, Philip S.; Barone, Matthew F.

An implicit, low-dissipation, low-Mach, variable density control volume finite element formulation is used to explore foundational understanding of numerical accuracy for large-eddy simulation applications on hybrid meshes. Detailed simulation comparisons are made between low-order hexahedral, tetrahedral, pyramid, and wedge/prism topologies against a third-order, unstructured hexahedral topology. Using smooth analytical and manufactured low-Mach solutions, design-order convergence is established for the hexahedral, tetrahedral, pyramid, and wedge element topologies using a new open boundary condition based on energy-stable methodologies previously deployed within a finite-difference context. A wide range of simulations demonstrate that low-order hexahedral- and wedge-based element topologies behave nearly identically in both computed numerical errors and overall simulation timings. Moreover, low-order tetrahedral and pyramid element topologies also display nearly the same numerical characteristics. Although the superiority of the hexahedral-based topology is clearly demonstrated for trivial laminar, principally-aligned flows, e.g., a 1×2×10 channel flow with specified pressure drop, this advantage is reduced for non-aligned, turbulent flows including the Taylor–Green vortex, turbulent plane channel flow (Reτ = 395), and buoyant flow past a heated cylinder. With the order of accuracy demonstrated for both homogeneous and hybrid meshes, it is shown that solution verification for the selected complex flows can be established for all topology types. Although the number of elements in a mesh of like spacing comprised of tetrahedral, wedge, or pyramid elements increases as compared to the hexahedral counterpart, for wall-resolved large-eddy simulation, the increased assembly and residual-evaluation computational time for non-hexahedral topologies is offset by more efficient linear solver times. Finally, most simulation results indicate that modest polynomial promotion provides a significant increase in solution accuracy.

Nalu's Linear System Assembly using Tpetra

Domino, Stefan P.; Williams, Alan B.

The Nalu Exascale Wind application assembles linear systems using data structures provided by the Tpetra package in Trilinos. This note describes the initialization and assembly process. The purpose of this note is to help Nalu developers and maintainers understand the code surrounding linear system assembly, in order to facilitate debugging, optimizations, and maintenance.

Decrease time-to-solution through improved linear-system setup and solve

Hu, Jonathan J.; Thomas, Stephen T.; Dohrmann, Clark R.; Ananthan, Shreyas A.; Domino, Stefan P.; Williams, Alan B.; Sprague, Michael S.

The goal of the ExaWind project is to enable predictive simulations of wind farms composed of many MW-scale turbines situated in complex terrain. Predictive simulations will require computational fluid dynamics (CFD) simulations for which the mesh resolves the geometry of the turbines, and captures the rotation and large deflections of blades. Whereas such simulations for a single turbine are arguably petascale class, multi-turbine wind farm simulations will require exascale-class resources. The primary code in the ExaWind project is Nalu, which is an unstructured-grid solver for the acoustically-incompressible Navier-Stokes equations, and mass continuity is maintained through pressure projection. The model consists of the mass-continuity Poisson-type equation for pressure and a momentum equation for the velocity. For such modeling approaches, simulation times are dominated by linear-system setup and solution for the continuity and momentum systems. For the ExaWind challenge problem, the moving meshes greatly affect overall solver costs, as re-initialization of matrices and re-computation of preconditioners is required at every time step. We describe in this report our efforts to decrease the setup and solution time for the mass-continuity Poisson system with respect to the benchmark timing results reported in FY18 Q1. In particular, we investigate improving and evaluating two types of algebraic multigrid (AMG) preconditioners: classical Ruge-Stüben AMG (C-AMG) and smoothed-aggregation AMG (SA-AMG), which are implemented in the Hypre and Trilinos/MueLu software stacks, respectively. Preconditioner performance was optimized through existing capabilities and settings.

Design-order, non-conformal low-Mach fluid algorithms using a hybrid CVFEM/DG approach

Journal of Computational Physics

Domino, Stefan P.

A hybrid, design-order sliding mesh algorithm, which uses a control volume finite element method (CVFEM) in conjunction with a discontinuous Galerkin (DG) approach at non-conformal interfaces, is outlined in the context of a low-Mach fluid dynamics equation set. This novel hybrid DG approach is also demonstrated to be compatible with a classic edge-based vertex-centered (EBVC) scheme. For the CVFEM, element polynomial promotion is used to extend the low-order (P=1) method to higher order, i.e., P=2. An equal-order, low-Mach, pressure-stabilized methodology, with emphasis on the non-conformal interface boundary condition, is presented. A fully implicit matrix solver approach that accounts for the full stencil connectivity across the non-conformal interface is employed. A complete suite of formal verification studies using the method of manufactured solutions (MMS) is performed to verify the order of accuracy of the underlying methodology. The chosen suite of analytical verification cases ranges from a simple steady diffusion system to a traveling viscous vortex across mixed-order non-conformal interfaces. Results from all verification studies demonstrate either second- or third-order spatial accuracy and, for transient solutions, second-order temporal accuracy. Significant accuracy gains in manufactured solution error norms are noted even with modest promotion of the underlying polynomial order. The paper also demonstrates the CVFEM/DG methodology on two production-like simulation cases that include an inner block subjected to solid rotation, i.e., each simulation includes a sliding mesh, non-conformal interface. The first production case presented is turbulent flow past a high-rate-of-rotation cube (Re = 4000; 3600 RPM) on like- and mixed-order polynomial interfaces. The final simulation case is a full-scale Vestas V27 225 kW wind turbine (tower and nacelle omitted) in which a hybrid-topology, low-order mesh is used. Both production simulations provide confidence in the underlying capability and demonstrate the viability of this hybrid method for deployment towards high-fidelity wind energy validation and analysis.
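The order-of-accuracy verification described above reduces to computing the observed convergence order from error norms on successively refined meshes; a minimal sketch:

```python
import math

def observed_order(e_coarse, e_fine, r=2.0):
    """Observed order of accuracy from error norms on two meshes related
    by refinement ratio r: p = log(e_coarse / e_fine) / log(r)."""
    return math.log(e_coarse / e_fine) / math.log(r)

# Errors shrinking by 4x per halving of mesh spacing indicate second order;
# the error values here are illustrative, not data from the paper.
p = observed_order(1.0e-2, 2.5e-3)
print(round(p, 2))  # → 2.0
```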

Deploy production sliding mesh capability with linear solver benchmarking

Domino, Stefan P.; Barone, Matthew F.; Williams, Alan B.; Knaus, Robert C.

Wind applications require the ability to simulate rotating blades. To support this use case, a novel design-order sliding mesh algorithm has been developed and deployed. The hybrid method combines the control volume finite element methodology (CVFEM) with concepts found within a discontinuous Galerkin (DG) finite element method (FEM) to manage a sliding mesh. The method has been demonstrated to be design-order for the tested polynomial bases (P=1 and P=2) and has been deployed to provide production simulation capability for a Vestas V27 (225 kW) wind turbine. Other stationary and canonical rotating flow simulations are also presented. As the majority of wind-energy applications are driving extensive usage of hybrid meshes, a foundational study that outlines near-wall numerical behavior for a variety of element topologies is presented. Results indicate that the proposed nonlinear stabilization operator (NSO) is an effective stabilization methodology to control Gibbs phenomena at large cell Peclet numbers. The study also provides practical mesh resolution guidelines for future analysis efforts. Application-driven performance and algorithmic improvements have been carried out to increase robustness of the scheme on hybrid production wind energy meshes. Specifically, the Kokkos-based Nalu Kernel construct outlined in the FY17/Q4 ExaWind milestone has been transitioned to the hybrid mesh regime. This code base is exercised within a full V27 production run. Simulation timings for parallel search and custom ghosting are presented. As the low-Mach application space requires implicit matrix solves, the cost of matrix reinitialization has been evaluated on a variety of production meshes. Results indicate that at low element counts, i.e., fewer than 100 million elements, matrix graph initialization and preconditioner setup times are small. However, as mesh sizes increase, e.g., 500 million elements, simulation time associated with "set-up" costs can increase to nearly 50% of overall simulation time when using the full Tpetra solver stack and nearly 35% when using a mixed Tpetra/Hypre-based solver stack. The report also highlights the project achievement of surpassing the 1-billion-element mesh scale for a production V27 hybrid mesh. A detailed timing breakdown is presented that again suggests work to be done in the setup events associated with the linear system. In order to mitigate these initialization costs, several application paths have been explored, all of which are designed to reduce the frequency of matrix reinitialization. Methods such as removing Jacobian entries on the dynamic matrix columns (in concert with increased inner equation iterations) and lagging of Jacobian entries have reduced setup times at the cost of numerical stability. Artificially increasing, or bloating, the matrix stencil to ensure that full Jacobians are included is developed, with results suggesting that this methodology is useful in decreasing reinitialization events without loss of matrix contributions. With the above foundational advances in computational capability, the project is well positioned to begin scientific inquiry on a variety of wind-farm physics such as turbine/turbine wake interactions.

Milestone Deliverable: FY18-Q1: Deploy production sliding mesh capability with linear solver benchmarking

Domino, Stefan P.

This milestone was focused on deploying and verifying a “sliding-mesh interface,” and establishing baseline timings for blade-resolved simulations of a sub-MW-scale turbine. In the ExaWind project, we are developing both sliding-mesh and overset-mesh approaches for handling the rotating blades in an operating wind turbine. In the sliding-mesh approach, the turbine rotor and its immediate surrounding fluid are captured in a “disk” that is embedded in the larger fluid domain. The embedded fluid is simulated in a coordinate system that rotates with the rotor. It is important that the coupling algorithm (and its implementation) between the rotating and inertial discrete models maintains the accuracy of the numerical methods on either side of the interface, i.e., the interface is “design order.”

Deploy Nalu/Kokkos algorithmic infrastructure with performance benchmarking

Domino, Stefan P.; Williams, Alan B.; Knaus, Robert C.

The former Nalu interior heterogeneous algorithm design, which was originally designed to manage matrix assembly operations over all elemental topology types, has been modified to operate over homogeneous collections of mesh entities. This newly templated kernel design allows for removal of workset variable resize operations that were formerly required at each loop over a Sierra ToolKit (STK) bucket (nominally, 512 entities in size). Extensive usage of the Standard Template Library (STL) std::vector has been removed in favor of intrinsic Kokkos memory views. In this milestone effort, the transition to Kokkos as the underlying infrastructure to support performance and portability on many-core architectures has been deployed for key matrix algorithmic kernels. A unit-test-driven design effort has developed a homogeneous entity algorithm that employs a team-based thread parallelism construct. The STK Single Instruction Multiple Data (SIMD) infrastructure is used to interleave data for improved vectorization. The collective algorithm design, which allows for concurrent threading and SIMD management, has been deployed for the core low-Mach element-based algorithm. Several tests to ascertain SIMD performance on Intel KNL and Haswell architectures have been carried out. The performance test matrix includes evaluation of both low- and higher-order methods. The higher-order low-Mach methodology builds on polynomial promotion of the core low-order control volume finite element method (CVFEM). Performance testing of the Kokkos-view/SIMD design indicates low-order matrix assembly kernel speed-ups ranging between two and four times, depending on mesh loading and node count. Better speedups are observed for higher-order meshes (currently only P=2 has been tested), especially on KNL. The increased workload per element on higher-order meshes benefits from the wide SIMD width on KNL machines. Combining multiple threads with SIMD on KNL achieves a 4.6x speedup over the baseline, with assembly timings faster than those observed on the Haswell architecture. The computational workload of higher-order meshes, therefore, seems ideally suited for the many-core architecture and justifies further exploration of higher-order methods on NGP platforms. A Trilinos/Tpetra-based multi-threaded GMRES preconditioned by symmetric Gauss-Seidel (SGS) represents the core solver infrastructure for the low-Mach advection/diffusion implicit solves. The threaded solver stack has been tested on small problems on NREL's Peregrine system using the newly developed and deployed Kokkos-view/SIMD kernels. Efforts are underway to deploy the Tpetra-based solver stack on the NERSC Cori system to benchmark its performance at scale on KNL machines.

Final Report for ALCC Allocation: Predictive Simulation of Complex Flow in Wind Farms

Barone, Matthew F.; Ananthan, Shreyas A.; Churchfield, Matt C.; Domino, Stefan P.; Henry de Frahan, Marc T.; Knaus, Robert C.; Melvin, Jeremy M.; Moser, Robert M.; Sprague, Michael S.; Thomas, Stephen T.

This report documents work performed using ALCC computing resources granted under a proposal submitted in February 2016, with the resource allocation period spanning July 2016 through June 2017. The award allocation was 10.7 million processor-hours at the National Energy Research Scientific Computing Center. The simulations performed were in support of two projects: the Atmosphere to Electrons (A2e) project, supported by the DOE EERE office; and the Exascale Computing Project (ECP), supported by the DOE Office of Science. The project team for both efforts consists of staff scientists and postdocs from Sandia National Laboratories and the National Renewable Energy Laboratory. At the heart of these projects is the open-source computational fluid dynamics (CFD) code, Nalu. Nalu solves the low-Mach-number Navier-Stokes equations using an unstructured-grid discretization. Nalu leverages the open-source Trilinos solver library and the Sierra Toolkit (STK) for parallelization and I/O. This report documents baseline computational performance of the Nalu code on problems of direct relevance to the wind plant physics application, namely, large eddy simulation (LES) of an atmospheric boundary layer (ABL) flow and wall-modeled LES of flow past a static wind turbine rotor blade. Parallel performance of Nalu and its constituent solver routines residing in the Trilinos library has been assessed previously under various campaigns. However, both Nalu and Trilinos have been, and remain, in active development, and resources have not been available previously to rigorously track code performance over time. With the initiation of the ECP, it is important to establish and document baseline code performance on the problems of interest. This will allow the project team to identify and target any deficiencies in performance, as well as highlight any performance bottlenecks as we exercise the code on a greater variety of platforms and at larger scales. The current study is rather modest in scale, examining performance on problem sizes of O(100 million) elements and core counts up to 8k cores. This will be expanded as more computational resources become available to the projects.

Multifidelity uncertainty quantification using spectral stochastic discrepancy models

Handbook of Uncertainty Quantification

Eldred, Michael S.; Ng, Leo W.T.; Barone, Matthew F.; Domino, Stefan P.

When faced with a restrictive evaluation budget that is typical of today's high-fidelity simulation models, the effective exploitation of lower-fidelity alternatives within the uncertainty quantification (UQ) process becomes critically important. Herein, we explore the use of multifidelity modeling within UQ, for which we rigorously combine information from multiple simulation-based models within a hierarchy of fidelity, in seeking accurate high-fidelity statistics at lower computational cost. Motivated by correction functions that enable the provable convergence of a multifidelity optimization approach to an optimal high-fidelity point solution, we extend these ideas to discrepancy modeling within a stochastic domain and seek convergence of a multifidelity uncertainty quantification process to globally integrated high-fidelity statistics. For constructing stochastic models of both the low-fidelity model and the model discrepancy, we employ stochastic expansion methods (non-intrusive polynomial chaos and stochastic collocation) computed by integration/interpolation on structured sparse grids or regularized regression on unstructured grids. We seek to employ a coarsely resolved grid for the discrepancy in combination with a more finely resolved grid for the low-fidelity model. The resolutions of these grids may be defined statically or determined through uniform and adaptive refinement processes. Adaptive refinement is particularly attractive, as it has the ability to preferentially target stochastic regions where the model discrepancy becomes more complex, i.e., where the predictive capabilities of the low-fidelity model start to break down and greater reliance on the high-fidelity model (via the discrepancy) is necessary. These adaptive refinement processes can either be performed separately for the different grids or within a coordinated multifidelity algorithm. In particular, we present an adaptive greedy multifidelity approach in which we extend the generalized sparse grid concept to consider candidate index set refinements drawn from multiple sparse grids, as governed by induced changes in the statistical quantities of interest and normalized by relative computational cost. Through a series of numerical experiments using statically defined sparse grids, adaptive multifidelity sparse grids, and multifidelity compressed sensing, we demonstrate that the multifidelity UQ process converges more rapidly than a single-fidelity UQ in cases where the variance of the discrepancy is reduced relative to the variance of the high-fidelity model (resulting in reductions in initial stochastic error), where the spectrum of the expansion coefficients of the model discrepancy decays more rapidly than that of the high-fidelity model (resulting in accelerated convergence rates), and/or where the discrepancy is more sparse than the high-fidelity model (requiring the recovery of fewer significant terms).
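The additive-discrepancy idea (sample the cheap model densely and correct it with a surrogate of the high-minus-low discrepancy built from a few expensive runs) can be sketched with a simple polynomial surrogate standing in for the chapter's sparse-grid stochastic expansions; the models and sampling below are hypothetical:

```python
import numpy as np

# Hypothetical high- and low-fidelity models of one uncertain input x.
def high(x):  # expensive "truth"
    return np.sin(3 * x) + 0.1 * x**2

def low(x):   # cheap approximation that misses the quadratic trend
    return np.sin(3 * x)

# Fit a cheap polynomial surrogate to the *discrepancy* high - low using
# only a handful of high-fidelity evaluations.
x_hf = np.linspace(-1.0, 1.0, 5)               # few expensive samples
delta = np.polynomial.Polynomial.fit(x_hf, high(x_hf) - low(x_hf), deg=2)

# Estimate the mean of the high-fidelity output from dense cheap samples
# plus the discrepancy surrogate, and compare to the direct estimate.
x = np.linspace(-1.0, 1.0, 10_001)
mf_mean = np.mean(low(x) + delta(x))           # multifidelity statistic
hf_mean = np.mean(high(x))                     # reference statistic
print(abs(mf_mean - hf_mean) < 1e-3)
```

Because the hypothetical discrepancy here is exactly quadratic, the degree-2 surrogate recovers it, so the multifidelity mean matches the high-fidelity mean using only five expensive evaluations.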

Uncertainty quantification in LES of channel flow

International Journal for Numerical Methods in Fluids

Safta, Cosmin S.; Blaylock, Myra L.; Templeton, Jeremy A.; Domino, Stefan P.; Sargsyan, Khachik S.; Najm, H.N.

In this paper, we present a Bayesian framework for estimating joint densities for large eddy simulation (LES) sub-grid scale model parameters based on canonical forced isotropic turbulence direct numerical simulation (DNS) data. The framework accounts for noise in the independent variables, and we present alternative formulations for accounting for discrepancies between model and data. To generate probability densities for flow characteristics, posterior densities for sub-grid scale model parameters are propagated forward through LES of channel flow and compared with DNS data. Synthesis of the calibration and prediction results demonstrates that model parameters have an explicit filter width dependence and are highly correlated. Discrepancies between DNS and calibrated LES results point to additional model form inadequacies that need to be accounted for.


Model sensitivities in LES predictions of buoyant methane fire plumes

2017 Fall Technical Meeting of the Western States Section of the Combustion Institute, WSSCI 2017

Koo, Heeseok K.; Hewson, John C.; Domino, Stefan P.; Knaus, Robert C.

A 1-m diameter methane fire plume has been studied using a large eddy simulation (LES) methodology. Eddy dissipation concept (EDC) and steady flamelet combustion models were used to describe interactions between buoyancy-induced turbulence and gas-phase combustion. Detailed comparisons with experimental data showed that the simulation is sensitive to the combustion model and mesh resolution. In particular, any excessive mixing results in a wider and more diffusive plume. As mesh resolution increases, the current simulations demonstrate a tendency toward excessive mixing.


Calibration and Forward Uncertainty Propagation for Large-eddy Simulations of Engineering Flows

Templeton, Jeremy A.; Blaylock, Myra L.; Domino, Stefan P.; Hewson, John C.; Kumar, Pritvi R.; Ling, Julia L.; Najm, H.N.; Ruiz, Anthony R.; Safta, Cosmin S.; Sargsyan, Khachik S.; Stewart, Alessia S.; Wagner, Gregory L.

The objective of this work is to investigate the efficacy of using calibration strategies from Uncertainty Quantification (UQ) to determine model coefficients for LES. Because the target methods are intended for engineering LES, uncertainty from numerical aspects of the model must also be quantified. The ultimate goal of this research thread is to generate a cost-versus-accuracy curve for LES so that cost can be minimized for an accuracy prescribed by an engineering need. Realization of this goal would enable LES to serve as a predictive simulation tool within the engineering design process.


Towards extreme-scale simulations for low mach fluids with second-generation trilinos

Parallel Processing Letters

Lin, Paul L.; Bettencourt, Matthew T.; Domino, Stefan P.; Fisher, Travis C.; Hoemmen, Mark F.; Hu, Jonathan J.; Phipps, Eric T.; Prokopenko, Andrey V.; Rajamanickam, Sivasankaran R.; Siefert, Christopher S.; Kennon, Stephen

Trilinos is an object-oriented software framework for the solution of large-scale, complex multi-physics engineering and scientific problems. While Trilinos was originally designed for scalable solutions of large problems, the fidelity needed by many simulations is significantly greater than what one could have envisioned two decades ago. When problem sizes exceed a billion elements, even scalable applications and solver stacks require a complete revision. The second-generation Trilinos employs C++ templates in order to solve arbitrarily large problems. We present a case study of the integration of Trilinos with a low Mach fluids engineering application (SIERRA low Mach module/Nalu). Through the use of improved algorithms and better software engineering practices, we demonstrate good weak scaling for up to a nine billion element large eddy simulation (LES) problem on unstructured meshes, with a 27 billion row matrix, on 524,288 cores of an IBM Blue Gene/Q platform.


Mesoscale to plant-scale models of nuclear waste reprocessing

Rao, Rekha R.; Pawlowski, Roger P.; Brotherton, Christopher M.; Cipiti, Benjamin B.; Domino, Stefan P.; Jove Colon, Carlos F.; Moffat, Harry K.; Nemer, Martin N.; Noble, David R.; O'Hern, Timothy J.

Imported oil exacerbates our trade deficit and funds anti-American regimes. Nuclear Energy (NE) is a demonstrated technology with high efficiency. NE's two biggest political detriments are possible accidents and nuclear waste disposal; for NE policy, proliferation is the biggest obstacle. Nuclear waste can be reduced through reprocessing, in which fuel rods are separated into various streams, some of which can be reused in reactors. The current process, developed in the 1950s, is dirty and expensive, and U/Pu separation is the most critical step. Fuel rods are sheared and dissolved in acid, and the fissile material is extracted in a centrifugal contactor; plants have many contactors in series with other separations. We have taken a science- and simulation-based approach to developing a modern reprocessing plant. Models of reprocessing plants are needed to support nuclear materials accountancy, nonproliferation, plant design, and plant scale-up.


Validation and uncertainty quantification of Fuego simulations of calorimeter heating in a wind-driven hydrocarbon pool fire

Luketa, Anay L.; Romero, Vicente J.; Domino, Stefan P.; Glaze, D.J.; Figueroa Faria, Victor G.

The objective of this work is to perform an uncertainty quantification (UQ) and model validation analysis of simulations of tests in the cross-wind test facility (XTF) at Sandia National Laboratories. In these tests, a calorimeter was subjected to a fire and its thermal response was measured via thermocouples (TCs). The UQ and validation analysis pertains to the experimental and predicted thermal response of the calorimeter. The calculations were performed using Sierra/Fuego/Syrinx/Calore, an Advanced Simulation and Computing (ASC) code suite capable of predicting object thermal response to a fire environment. Based on the validation results at eight diversely representative TC locations on the calorimeter, the predicted calorimeter temperatures effectively bound the experimental temperatures. This post-validates Sandia's first integrated use of fire modeling with thermal response modeling, and the associated uncertainty estimates, in an abnormal-thermal QMU analysis.


Highly scalable linear solvers on thousands of processors

Siefert, Christopher S.; Tuminaro, Raymond S.; Domino, Stefan P.; Robinson, Allen C.

In this report we summarize research into new parallel algebraic multigrid (AMG) methods. We first provide an introduction to parallel AMG. We then discuss our research into parallel AMG algorithms for very large-scale platforms. We detail significant improvements to the matrix-matrix multiplication kernel in the AMG setup phase. We present a smoothed aggregation AMG algorithm with fewer communication synchronization points and discuss its links to domain decomposition methods. Finally, we discuss a multigrid smoothing technique that utilizes two message-passing layers for use on multicore processors.
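To make the multigrid structure referenced here concrete, the following toy reproduces the essential setup/solve split — a Galerkin triple-product `P.T @ A @ P` in the setup phase (the matrix-matrix kernel the report optimizes) and a smooth/correct/smooth cycle in the solve phase. This is a geometric two-grid sketch for 1D Poisson, not the smoothed-aggregation AMG of the report; all of it is illustrative.

```python
# Minimal two-grid cycle for the 1D Poisson problem (illustrative only).
import numpy as np

def jacobi(A, x, b, iters=3, omega=2.0 / 3.0):
    # Damped Jacobi smoother: cheap, parallel-friendly relaxation.
    d = np.diag(A)
    for _ in range(iters):
        x = x + omega * (b - A @ x) / d
    return x

def two_grid(A, b, x, P):
    """One V(3,3) cycle: pre-smooth, coarse-grid correction, post-smooth."""
    x = jacobi(A, x, b)
    r = b - A @ x
    Ac = P.T @ A @ P                   # Galerkin coarse operator (setup product)
    xc = np.linalg.solve(Ac, P.T @ r)  # direct solve on the small coarse grid
    x = x + P @ xc
    return jacobi(A, x, b)

n = 31  # interior fine-grid points of -u'' = f with zero Dirichlet boundaries
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

# Linear-interpolation prolongator from 15 coarse to 31 fine points.
nc = 15
P = np.zeros((n, nc))
for j in range(nc):
    P[2 * j, j] = 0.5
    P[2 * j + 1, j] = 1.0
    if 2 * j + 2 < n:
        P[2 * j + 2, j] = 0.5

x = np.zeros(n)
res = [np.linalg.norm(b - A @ x)]
for _ in range(10):
    x = two_grid(A, b, x, P)
    res.append(np.linalg.norm(b - A @ x))
```

AMG differs in that `P` is built algebraically from the matrix entries (e.g., by smoothed aggregation) rather than from grid geometry, but the cycle and the Galerkin product are the same.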


A turbulence model for buoyant flows based on vorticity generation

Nicolette, Vernon F.; Tieszen, Sheldon R.; Black, Amalia R.; Domino, Stefan P.; O'Hern, Timothy J.

A turbulence model for buoyant flows has been developed in the context of a k-ε turbulence modeling approach. A production term is added to the turbulent kinetic energy equation based on dimensional reasoning, using an appropriate time scale for buoyancy-induced turbulence taken from the vorticity conservation equation. The resulting turbulence model is calibrated against far-field helium-air spread rate data and validated with near-source, strongly buoyant helium plume data sets. This model is more numerically stable and gives better predictions over a much broader range of mesh densities than the standard k-ε model for these strongly buoyant flows.
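The abstract gives the dimensional reasoning but not the model's exact form; a sketch consistent with that reasoning augments the standard k-equation with a buoyant production term built from a vorticity-generation time scale. The constant $C_b$ and the precise shape of $\tau_b$ below are illustrative assumptions, not the published model.

```latex
% Illustrative form only: standard k-equation plus a buoyant production
% term G_b scaled by a time scale \tau_b from baroclinic vorticity generation.
\frac{\partial (\rho k)}{\partial t} + \nabla\cdot(\rho \mathbf{u} k)
  = \nabla\cdot\!\left(\frac{\mu_t}{\sigma_k}\nabla k\right)
    + P_k + G_b - \rho\varepsilon,
\qquad
G_b \sim C_b\,\frac{\rho k}{\tau_b},
\qquad
\tau_b \sim \left(\frac{|\mathbf{g}\cdot\nabla\rho|}{\rho}\right)^{-1/2}.
```

Dimensionally, $|\mathbf{g}\cdot\nabla\rho|/\rho$ has units of $1/\mathrm{s}^2$, so $\tau_b$ is a time scale and $G_b$ has the units of a production rate, matching the reasoning the abstract describes.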


Validation of a simple turbulence model suitable for closure of temporally-filtered Navier-Stokes equations using a helium plume

Domino, Stefan P.; Black, Amalia R.

A validation study has been conducted for a turbulence model used to close the temporally filtered Navier-Stokes (TFNS) equations. The turbulence model was purpose-built to support fire simulations under the Accelerated Strategic Computing (ASC) program; it was developed so that fire transients could be simulated, and it has been implemented in SIERRA/Fuego. The model is validated using helium plume data acquired for the Weapon System Certification Campaign (C6) program in the Fire Laboratory for Model Accreditation and Experiments (FLAME). The helium plume experiments were chosen as the first validation problem for SIERRA/Fuego because they embody the first pair-wise coupling of scalar and momentum fields found in fire plumes. The validation study includes solution verification through grid and time-step refinement studies, and a formal statistical comparison is used to assess the model uncertainty. The metric uses the centerline vertical velocity of the plume. The results indicate that the simple model is within the 95% confidence interval of the data for elevations greater than 0.4 m and is never more than twice the confidence interval from the data. The model clearly captures the dominant puffing mode in the fire but under-resolves the vorticity field. Grid dependency of the model is noted.
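The confidence-interval comparison metric described in this abstract can be sketched as follows: at each elevation, compute the 95% interval on the experimental mean and check whether the model prediction falls within it, and within twice it. The elevations, velocities, and sample counts below are made-up illustration data, not the FLAME measurements.

```python
# Sketch of a 95%-CI validation metric (illustrative data, normal approximation).
import math

# elevation (m), experimental mean velocity (m/s), std dev, number of repeats
exp_data = [
    (0.2, 1.9, 0.20, 10),
    (0.4, 2.6, 0.22, 10),
    (0.6, 3.1, 0.25, 10),
]
# Hypothetical model predictions at the same elevations.
model_pred = {0.2: 2.1, 0.4: 2.7, 0.6: 3.2}

Z95 = 1.96  # normal-approximation multiplier for a 95% interval on the mean

results = {}
for z, mean, sd, n in exp_data:
    half = Z95 * sd / math.sqrt(n)    # 95% CI half-width of the mean
    dist = abs(model_pred[z] - mean)  # model-to-data distance
    results[z] = {
        'within_ci': dist <= half,            # inside the 95% interval?
        'ci_multiples': dist / half,          # the "twice the CI" criterion
    }
```

With these toy numbers the pattern mirrors the abstract's finding: the prediction lies within the interval at the higher elevations and within twice the interval everywhere.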
