The elemental equation governing heat transfer in aerodynamic flows is the internal energy equation. For a boundary-layer flow, a double integration of the Reynolds-averaged form of this equation expresses the wall heat flux in terms of the integrated effects, over the boundary layer, of various physical processes: turbulent dissipation, mean dissipation, turbulent heat flux, etc. Recently available direct numerical simulation data for a Mach 11 cold-wall turbulent boundary layer allow a comparison of the exact contributions of these terms in the energy equation to the wall heat flux with their counterparts modeled in the Reynolds-averaged Navier-Stokes (RANS) framework. Various approximations involved in RANS, both closure models and the approximations made in adapting incompressible RANS models to compressible form, are assessed through examination of the internal energy balance. This analysis identifies a number of potentially problematic assumptions and terms. Here, the effect of compressibility corrections of the dilatational dissipation type is explored, as is the role of the modeled turbulent dissipation, in the context of wall heat flux predictions. The results indicate several potential avenues for RANS model improvement for hypersonic cold-wall boundary-layer flows.
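For context, a representative compressibility correction of the dilatational-dissipation type (shown here in the generic Sarkar-type form; the specific corrections assessed in the study may differ) splits the turbulent dissipation into solenoidal and dilatational parts and models the latter in terms of the turbulent Mach number:

```latex
\varepsilon \;=\; \varepsilon_s + \varepsilon_d,
\qquad
\varepsilon_d \;=\; \alpha_1 \, M_t^2 \, \varepsilon_s,
\qquad
M_t^2 \;=\; \frac{2k}{\bar{a}^{\,2}},
```

where $k$ is the turbulent kinetic energy, $\bar{a}$ the mean speed of sound, and $\alpha_1$ a model coefficient of order unity.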
Machine-learned models, specifically neural networks, are increasingly used as “closures” or “constitutive models” in engineering simulators to represent fine-scale physical phenomena that are too computationally expensive to resolve explicitly. However, these neural net models of unresolved physical phenomena tend to fail unpredictably and are therefore not used in mission-critical simulations. In this report, we describe new methods to authenticate them, i.e., to determine the (physical) information content of their training datasets, to qualify the scenarios where they may be used, and to verify that the neural net, as trained, adheres to physics theory. We demonstrate these methods with a neural net closure of turbulent phenomena used in Reynolds-Averaged Navier-Stokes equations. We show the types of turbulent physics extant in our training datasets and, using a test flow of an impinging jet, identify the exact locations where the neural network would be extrapolating, i.e., where it would be used outside the feature space in which it was trained. Using Generalized Linear Mixed Models, we also generate explanations of the neural net (à la Local Interpretable Model-agnostic Explanations) at prototypes placed in the training data and compare them with approximate analytical models from turbulence theory. Finally, we verify our findings by reproducing them using two different methods.
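The local-explanation step can be sketched in a few lines: perturb the inputs around a prototype, query the black-box model, weight the samples by proximity, and fit a weighted linear surrogate whose slopes serve as local feature attributions. This is a minimal LIME-style illustration with a made-up analytic function standing in for the neural-net closure; it is not the GLMM machinery used in the report.

```python
import numpy as np

def lime_explain(black_box, prototype, n_samples=500, kernel_width=0.75, seed=0):
    """Fit a locally weighted linear surrogate around `prototype`.

    Perturbs the prototype, queries the black-box model, weights samples
    by a Gaussian proximity kernel, and solves a weighted least-squares
    problem. The returned slopes are the local feature attributions.
    """
    rng = np.random.default_rng(seed)
    d = prototype.size
    X = prototype + rng.normal(scale=0.1, size=(n_samples, d))  # local perturbations
    y = np.array([black_box(x) for x in X])
    dist2 = np.sum((X - prototype) ** 2, axis=1)
    w = np.exp(-dist2 / kernel_width**2)                        # proximity kernel
    A = np.hstack([np.ones((n_samples, 1)), X])                 # intercept + features
    Aw = A * np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(Aw, y * np.sqrt(w), rcond=None)
    return coef[0], coef[1:]                                    # intercept, slopes

# Example: a nonlinear stand-in model whose local gradient at x0 is known
# analytically, so the recovered slopes can be checked.
x0 = np.array([1.0, 2.0])
f = lambda x: x[0] ** 2 + 3.0 * x[1]   # gradient at x0 is (2, 3)
intercept, slopes = lime_explain(f, x0)
```

The recovered slopes approximate the true local gradient; comparing such slopes against approximate analytical models from turbulence theory is the spirit of the verification described above.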
The development of a next generation high-fidelity modeling code for wind plant applications is one of the central focus areas of the U.S. Department of Energy Atmosphere to Electrons (A2e) initiative. The code is based on a highly scalable framework, currently called Nalu-Wind. One key aspect of the model development is a coordinated formal validation program undertaken specifically to establish the predictive capability of Nalu-Wind for wind plant applications. The purpose of this document is to define the verification and validation (V&V) plan for the A2e high-fidelity modeling capability. It summarizes the V&V framework, identifies code capability users and use cases, describes model validation needs, and presents a timeline to meet those needs.
A blind CFD validation challenge is being organized for the unsteady transonic shock motion induced by the Sandia Axisymmetric Transonic Hump, which echoes the Bachalo-Johnson configuration. The wind tunnel and model geometry will be released at the start of the validation challenge along with flow boundary conditions. Primary data concerning the unsteady separation region will be released at the conclusion of the challenge, after computational entries have been submitted. This paper details the organization of the challenge, its schedule, and the metrics of comparison by which the models will be assessed.
An experimental characterization of the flow environment for the Sandia Axisymmetric Transonic Hump is presented. This is an axisymmetric model with a circular hump tested at a transonic Mach number, similar to the classic Bachalo-Johnson configuration. The flow is turbulent approaching the hump and becomes locally supersonic at the apex. This leads to a shock-wave/boundary-layer interaction, an unsteady separation bubble, and flow reattachment downstream. The characterization focuses on the quantities required to set proper boundary conditions for computational efforts described in the companion paper, including: 1) stagnation and test section pressure and temperature; 2) turbulence intensity; and 3) tunnel wall boundary layer profiles. Model characterization upstream of the hump includes: 1) surface shear stress; and 2) boundary layer profiles. Note: Numerical values characterizing the experiment have been redacted from this version of the paper. Model geometry and boundary conditions will be withheld until the official start of the Validation Challenge, at which time a revised version of this paper will become available. Data surrounding the hump are considered final results and will be withheld until completion of the Validation Challenge.
An implicit, low-dissipation, low-Mach, variable-density control volume finite element formulation is used to explore foundational understanding of numerical accuracy for large-eddy simulation applications on hybrid meshes. Detailed simulation comparisons are made between low-order hexahedral, tetrahedral, pyramid, and wedge/prism topologies against a third-order, unstructured hexahedral topology. Using smooth analytical and manufactured low-Mach solutions, design-order convergence is established for the hexahedral, tetrahedral, pyramid, and wedge element topologies using a new open boundary condition based on energy-stable methodologies previously deployed within a finite-difference context. A wide range of simulations demonstrate that low-order hexahedral- and wedge-based element topologies behave nearly identically in both computed numerical errors and overall simulation timings. Moreover, low-order tetrahedral and pyramid element topologies also display nearly the same numerical characteristics. Although the superiority of the hexahedral-based topology is clearly demonstrated for trivial laminar, principally-aligned flows, e.g., a 1×2×10 channel flow with specified pressure drop, this advantage is reduced for non-aligned, turbulent flows including the Taylor–Green vortex, turbulent plane channel flow (Reτ = 395), and buoyant flow past a heated cylinder. With the order of accuracy demonstrated for both homogeneous and hybrid meshes, it is shown that solution verification for the selected complex flows can be established for all topology types. Although the number of elements in a mesh of like spacing comprised of tetrahedral, wedge, or pyramid elements increases as compared to the hexahedral counterpart, for wall-resolved large-eddy simulation, the increased assembly and residual evaluation computational time for non-hexahedral topologies is offset by more efficient linear solver times.
Finally, most simulation results indicate that modest polynomial promotion provides a significant increase in solution accuracy.
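Design-order convergence claims of this kind are conventionally established by computing an observed order of accuracy from errors on successively refined meshes. A minimal, generic sketch (not tied to the Nalu-Wind implementation; the error values below are hypothetical):

```python
import math

def observed_order(e_coarse, e_fine, r):
    """Observed order of accuracy from errors on two meshes.

    e_coarse, e_fine : discretization errors (e.g., L2 norms against a
                       manufactured solution) on meshes whose spacings
                       differ by the refinement ratio r = h_coarse / h_fine.
    """
    return math.log(e_coarse / e_fine) / math.log(r)

# Example: a nominally second-order scheme under uniform mesh doubling
# (r = 2); halving the spacing quarters the error.
p = observed_order(4.0e-3, 1.0e-3, 2.0)   # observed order close to 2
```

If the computed p matches the design order of the discretization as the mesh is refined, solution verification of the kind described above is achieved.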
A new wind tunnel experiment is underway to provide a comprehensive CFD validation dataset of an unsteady, transonic flow. The experiment is based on the work of Bachalo and Johnson; an axisymmetric model with a spherical hump is tested at a transonic Mach number. The flow is turbulent approaching the hump and becomes locally supersonic at the apex. This leads to a shock-wave/boundary-layer interaction, an unsteady separation bubble, and flow reattachment downstream. A suite of diagnostics characterizes the flow: oil-flow surface visualization for shock and reattachment locations, particle image velocimetry for mean flow and turbulence properties, fast pressure-sensitive paint for model pressure distributions and unsteadiness, high-speed Schlieren for shock position and motion, and oil-film interferometry for surface shear stress. This will provide a new level of detail for validation studies; therefore, a blind comparison, or ‘CFD Challenge’ is proposed to the community. Participants are to be provided the geometry, incoming boundary layer, and boundary conditions, and are free to simulate with their method of choice and submit their results. A blind comparison will be made to the new experimental data, with the goal of evaluating the state of various CFD methods for use in unsteady, transonic flows.
Near-wall turbulence models in Large-Eddy Simulation (LES) typically approximate near-wall behavior using a solution to the mean flow equations. This approach inevitably leads to errors when the modeled flow does not satisfy the assumptions surrounding the use of a mean flow approximation for an unsteady boundary condition. Herein, modern machine learning (ML) techniques are utilized to implement a coordinate-frame-invariant model of the wall shear stress that is derived specifically for complex flows for which mean near-wall models are known to fail. The model operates on a set of scalar and vector invariants based on data taken from the first LES grid point off the wall. Neural networks were trained and validated on spatially filtered direct numerical simulation (DNS) data. The trained networks were then tested on data to which they were never previously exposed, and the accuracy of the networks' wall shear stress predictions was compared to both a standard mean wall model approach and the true stress values taken from the DNS data. The ML approach improved both the accuracy of individual shear stress predictions and the overall distribution of wall shear stress values relative to the standard mean wall model. This result held both in regions where the standard mean approach typically performs satisfactorily and in regions where it is known to fail, and both when the networks were trained and tested on data from the same flow type/region and when trained and tested on data from different flow topologies.
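For reference, the "standard mean wall model" baseline against which such ML approaches are compared is often an equilibrium log-law closure: given the LES velocity at the first off-wall grid point, the log law is solved implicitly for the friction velocity. A minimal fixed-point sketch with generic constants (κ = 0.41, B = 5.2) and hypothetical input values; this is not the specific model used in the study:

```python
import math

def log_law_utau(u, y, nu, kappa=0.41, B=5.2, iters=50):
    """Solve u/u_tau = (1/kappa) ln(y u_tau / nu) + B for the friction
    velocity u_tau by fixed-point iteration; tau_w = rho * u_tau**2."""
    utau = max(1e-8, math.sqrt(nu * u / y))   # laminar-scaling initial guess
    for _ in range(iters):
        utau = u / (math.log(y * utau / nu) / kappa + B)
    return utau

# Example: air-like viscosity with the first grid point in the log layer.
u, y, nu = 10.0, 1.0e-3, 1.5e-5
utau = log_law_utau(u, y, nu)
yplus = y * utau / nu   # wall unit distance of the matching point
```

Because this closure presumes an attached, equilibrium mean boundary layer, it degrades in exactly the separated and non-equilibrium regions where the invariant-based networks show the largest gains.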
Deep-water offshore sites are an untapped opportunity to bring large-scale offshore wind energy to coastal population centers. The primary challenge has been the projected high costs for floating offshore wind systems. This work presents a comprehensive investigation of a new opportunity for deep-water offshore wind using large-scale vertical axis wind turbines. Owing to inherent features of this technology, there is a potential transformational opportunity to address the major cost drivers for floating wind using vertical axis wind turbines. The focus of this report is to evaluate the technical potential for this new technology. The approach to evaluating this potential was to perform system design studies focused on improving the understanding of technical performance parameters while looking for cost reduction opportunities. VAWT design codes were developed in order to perform these design studies. To gain a better understanding of the design space for floating VAWT systems, a comprehensive design study of multiple rotor configuration options was carried out. Floating platforms and moorings were then sized and evaluated for each of the candidate rotor configurations. Preliminary LCOE estimates and LCOE ranges were produced based on the design study results for each of the major turbine and system components. The major outcomes of this study are a comprehensive technology assessment of VAWT performance and preliminary LCOE estimates that demonstrate that floating VAWTs may have favorable performance and costs in comparison to conventional HAWTs in the deep-water offshore environment where floating systems are required, indicating that this new technology warrants further study.
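The LCOE estimates referred to above are conventionally assembled from a fixed-charge-rate formulation, shown here in its generic form (the report's specific cost model and inputs may differ):

```latex
\mathrm{LCOE} \;=\; \frac{\mathrm{FCR} \times \mathrm{CapEx} \;+\; \mathrm{OpEx}}{\mathrm{AEP}_{\mathrm{net}}},
```

where FCR is the fixed charge rate, CapEx the installed capital cost, OpEx the annual operating cost, and AEP_net the net annual energy production. The formula makes plain why VAWT features that cut platform and mooring CapEx, or raise energy capture, directly lower LCOE.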
Wind applications require the ability to simulate rotating blades. To support this use-case, a novel design-order sliding mesh algorithm has been developed and deployed. The hybrid method combines the control volume finite element methodology (CVFEM) with concepts found within a discontinuous Galerkin (DG) finite element method (FEM) to manage a sliding mesh. The method has been demonstrated to be design-order for the tested polynomial basis (P=1 and P=2) and has been deployed to provide production simulation capability for a Vestas V27 (225 kW) wind turbine. Other stationary and canonical rotating flow simulations are also presented. As the majority of wind-energy applications are driving extensive usage of hybrid meshes, a foundational study that outlines near-wall numerical behavior for a variety of element topologies is presented. Results indicate that the proposed nonlinear stabilization operator (NSO) is an effective stabilization methodology to control Gibbs phenomena at large cell Peclet numbers. The study also provides practical mesh resolution guidelines for future analysis efforts. Application-driven performance and algorithmic improvements have been carried out to increase robustness of the scheme on hybrid production wind energy meshes. Specifically, the Kokkos-based Nalu Kernel construct outlined in the FY17/Q4 ExaWind milestone has been transitioned to the hybrid mesh regime. This code base is exercised within a full V27 production run. Simulation timings for parallel search and custom ghosting are presented. As the low-Mach application space requires implicit matrix solves, the cost of matrix reinitialization has been evaluated on a variety of production meshes. Results indicate that at low element counts, i.e., fewer than 100 million elements, matrix graph initialization and preconditioner setup times are small.
However, as mesh sizes increase, e.g., 500 million elements, simulation time associated with "setup" costs can increase to nearly 50% of overall simulation time when using the full Tpetra solver stack and nearly 35% when using a mixed Tpetra-Hypre-based solver stack. The report also highlights the project achievement of surpassing the 1 billion element mesh scale for a production V27 hybrid mesh. A detailed timing breakdown is presented that again suggests work to be done in the setup events associated with the linear system. In order to mitigate these initialization costs, several application paths have been explored, all of which are designed to reduce the frequency of matrix reinitialization. Methods such as removing Jacobian entries on the dynamic matrix columns (in concert with increased inner equation iterations) and lagging of Jacobian entries have reduced setup times at the cost of numerical stability. Artificially increasing, or bloating, the matrix stencil to ensure that full Jacobians are included is developed, with results suggesting that this methodology is useful in decreasing reinitialization events without loss of matrix contributions. With the above foundational advances in computational capability, the project is well positioned to begin scientific inquiry on a variety of wind-farm physics such as turbine/turbine wake interactions.
This report summarizes fiscal year (FY) 2017 progress towards developing and implementing within the SPARC in-house finite volume flow solver advanced fluid reduced order models (ROMs) for compressible captive-carriage flow problems of interest to Sandia National Laboratories for the design and qualification of nuclear weapons components. The proposed projection-based model order reduction (MOR) approach, known as the Proper Orthogonal Decomposition (POD)/Least-Squares Petrov-Galerkin (LSPG) method, can substantially reduce the CPU-time requirement for these simulations, thereby enabling advanced analyses such as uncertainty quantification and design optimization. Following a description of the project objectives and FY17 targets, we overview briefly the POD/LSPG approach to model reduction implemented within SPARC. We then study the viability of these ROMs for long-time predictive simulations in the context of a two-dimensional viscous laminar cavity problem, and describe some FY17 enhancements to the proposed model reduction methodology that led to ROMs with improved predictive capabilities. Also described in this report are some FY17 efforts pursued in parallel to the primary objective of determining whether the ROMs in SPARC are viable for the targeted application. These include the implementation and verification of some higher-order finite volume discretization methods within SPARC (towards using the code to study the viability of ROMs on three-dimensional cavity problems) and a novel structure-preserving constrained POD/LSPG formulation that can improve the accuracy of projection-based reduced order models. We conclude the report by summarizing the key takeaways from our FY17 findings, and providing some perspectives for future work.
This report documents work performed using ALCC computing resources granted under a proposal submitted in February 2016, with the resource allocation period spanning the period July 2016 through June 2017. The award allocation was 10.7 million processor-hours at the National Energy Research Scientific Computing Center. The simulations performed were in support of two projects: the Atmosphere to Electrons (A2e) project, supported by the DOE EERE office; and the Exascale Computing Project (ECP), supported by the DOE Office of Science. The project team for both efforts consists of staff scientists and postdocs from Sandia National Laboratories and the National Renewable Energy Laboratory. At the heart of these projects is the open-source computational-fluid-dynamics (CFD) code, Nalu. Nalu solves the low-Mach-number Navier-Stokes equations using an unstructured-grid discretization. Nalu leverages the open-source Trilinos solver library and the Sierra Toolkit (STK) for parallelization and I/O. This report documents baseline computational performance of the Nalu code on problems of direct relevance to the wind plant physics application - namely, Large Eddy Simulation (LES) of an atmospheric boundary layer (ABL) flow and wall-modeled LES of a flow past a static wind turbine rotor blade. Parallel performance of Nalu and its constituent solver routines residing in the Trilinos library has been assessed previously under various campaigns. However, both Nalu and Trilinos have been, and remain, in active development and resources have not been available previously to rigorously track code performance over time. With the initiation of the ECP, it is important to establish and document baseline code performance on the problems of interest. This will allow the project team to identify and target any deficiencies in performance, as well as highlight any performance bottlenecks as we exercise the code on a greater variety of platforms and at larger scales.
The current study is rather modest in scale, examining performance on problem sizes of O(100 million) elements and core counts up to 8k cores. This will be expanded as more computational resources become available to the projects.
When faced with a restrictive evaluation budget that is typical of today's high-fidelity simulation models, the effective exploitation of lower-fidelity alternatives within the uncertainty quantification (UQ) process becomes critically important. Herein, we explore the use of multifidelity modeling within UQ, for which we rigorously combine information from multiple simulation-based models within a hierarchy of fidelity, in seeking accurate high-fidelity statistics at lower computational cost. Motivated by correction functions that enable the provable convergence of a multifidelity optimization approach to an optimal high-fidelity point solution, we extend these ideas to discrepancy modeling within a stochastic domain and seek convergence of a multifidelity uncertainty quantification process to globally integrated high-fidelity statistics. For constructing stochastic models of both the low-fidelity model and the model discrepancy, we employ stochastic expansion methods (non-intrusive polynomial chaos and stochastic collocation) computed by integration/interpolation on structured sparse grids or regularized regression on unstructured grids. We seek to employ a coarsely resolved grid for the discrepancy in combination with a more finely resolved grid for the low-fidelity model. The resolutions of these grids may be defined statically or determined through uniform and adaptive refinement processes.
Adaptive refinement is particularly attractive, as it has the ability to preferentially target stochastic regions where the model discrepancy becomes more complex, i.e., where the predictive capabilities of the low-fidelity model start to break down and greater reliance on the high-fidelity model (via the discrepancy) is necessary. These adaptive refinement processes can either be performed separately for the different grids or within a coordinated multifidelity algorithm. In particular, we present an adaptive greedy multifidelity approach in which we extend the generalized sparse grid concept to consider candidate index set refinements drawn from multiple sparse grids, as governed by induced changes in the statistical quantities of interest and normalized by relative computational cost. Through a series of numerical experiments using statically defined sparse grids, adaptive multifidelity sparse grids, and multifidelity compressed sensing, we demonstrate that the multifidelity UQ process converges more rapidly than a single-fidelity UQ in cases where the variance of the discrepancy is reduced relative to the variance of the high-fidelity model (resulting in reductions in initial stochastic error), where the spectrum of the expansion coefficients of the model discrepancy decays more rapidly than that of the high-fidelity model (resulting in accelerated convergence rates), and/or where the discrepancy is more sparse than the high-fidelity model (requiring the recovery of fewer significant terms).
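The additive-discrepancy idea at the core of this approach can be sketched compactly: approximate the high-fidelity response as the low-fidelity model plus a correction surrogate fit from a small number of paired evaluations. The sketch below uses simple one-dimensional polynomial surrogates and made-up model functions, not the sparse-grid stochastic expansions of the study; it illustrates why a smooth discrepancy can be resolved on a much coarser grid than the high-fidelity response itself.

```python
import numpy as np

def multifidelity_surrogate(f_lo, f_hi, x_lo, x_hi, deg_lo=6, deg_hi=2):
    """Additive multifidelity correction: f_hi(x) ≈ f_lo(x) + delta(x).

    The cheap low-fidelity model is sampled densely (x_lo), while the
    discrepancy delta = f_hi - f_lo is fit on a coarse sample set (x_hi),
    exploiting the assumption that delta is simpler than f_hi.
    """
    p_lo = np.polynomial.Polynomial.fit(x_lo, f_lo(x_lo), deg_lo)
    delta = np.polynomial.Polynomial.fit(x_hi, f_hi(x_hi) - f_lo(x_hi), deg_hi)
    return lambda x: p_lo(x) + delta(x)

# Example: oscillatory high-fidelity model whose low-fidelity counterpart
# misses only a smooth quadratic trend, so the discrepancy is easy to fit.
f_hi = lambda x: np.sin(6.0 * x) + 0.5 * x**2
f_lo = lambda x: np.sin(6.0 * x)
x_lo = np.linspace(0.0, 1.0, 41)   # many cheap low-fidelity samples
x_hi = np.linspace(0.0, 1.0, 4)    # few expensive high-fidelity samples
model = multifidelity_surrogate(f_lo, f_hi, x_lo, x_hi)
x_test = np.linspace(0.0, 1.0, 101)
err = np.max(np.abs(model(x_test) - f_hi(x_test)))   # small despite only 4 f_hi calls
```

Four high-fidelity evaluations suffice here because the discrepancy is a quadratic; a direct degree-2 fit of f_hi on the same four points would miss the oscillations entirely.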
In many aerospace applications, it is critical to be able to model fluid-structure interactions. In particular, correctly predicting the power spectral density of pressure fluctuations at surfaces can be important for assessing potential resonances and failure modes. Current turbulence modeling methods, such as wall-modeled Large Eddy Simulation and Detached Eddy Simulation, cannot reliably predict these pressure fluctuations for many applications of interest. The focus of this paper is on efforts to use data-driven machine learning methods to learn correction terms for the wall pressure fluctuation spectrum. In particular, the non-locality of the wall pressure fluctuations in a compressible boundary layer is investigated using random forests and neural networks trained and evaluated on Direct Numerical Simulation data.
Recent field experiments conducted in the near wake (up to 0.5 rotor diameters downwind of the rotor) of a Clipper Liberty C96 2.5 MW wind turbine using snow-based super-large-scale particle image velocimetry (SLPIV) (Hong et al., Nat. Commun., vol. 5, 2014, 4216) were successful in visualizing tip vortex cores as areas devoid of snowflakes. The so-visualized snow voids, however, suggested tip vortex cores of complex shape consisting of circular cores with distinct elongated comet-like tails. We employ large-eddy simulation (LES) to elucidate the structure and dynamics of the complex tip vortices identified experimentally. We show that the LES, with inflow conditions representing as closely as possible the state of the flow approaching the turbine when the SLPIV experiments were carried out, reproduces vortex cores in good qualitative agreement with the SLPIV results, essentially capturing all vortex core patterns observed in the field in the tip shear layer. The computed results show that the visualized vortex patterns are formed by the tip vortices and a second set of counter-rotating spiral vortices intertwined with the tip vortices. To probe the dependence of these newly uncovered coherent flow structures on turbine design, size and approach flow conditions, we carry out LES for three additional turbines: (i) the Scaled Wind Farm Technology (SWiFT) turbine developed by Sandia National Laboratories in Lubbock, TX, USA; (ii) the wind turbine developed for the European collaborative Mexico (Model Experiments in Controlled Conditions) project; and (iii) the model turbine presented in the paper by Lignarolo et al. (J. Fluid Mech., vol. 781, 2015, pp. 467-493); we also simulate the Clipper turbine under varying inflow turbulence conditions. We show that similar counter-rotating vortex structures as those observed for the Clipper turbine are also observed for the SWiFT, Mexico and model wind turbines.
However, the strength of the counter-rotating vortices relative to that of the tip vortices from the model turbine is significantly weaker. We also show that incoming flows with low level turbulence attenuate the elongation of the tip and counter-rotating vortices. Sufficiently high turbulence levels in the incoming flow, on the other hand, tend to break up the coherence of spiral vortices in the near wake. To elucidate the physical mechanism that gives rise to such rich coherent dynamics we examine the stability of the turbine tip shear layer using the theory proposed by Leibovich & Stewartson (J. Fluid Mech., vol. 126, 1983, pp. 335-356). We show that for all simulated cases the theory consistently indicates the flow to be unstable exactly in the region where counter-rotating spirals emerge. We thus postulate that centrifugal instability of the rotating turbine tip shear layer is a possible mechanism for explaining the phenomena we have uncovered herein.
Vertical axis wind turbines are receiving significant attention for offshore siting. In general, offshore wind offers proximity to large population centers, a vast and more consistent wind resource, and a scale-up opportunity, to name a few beneficial characteristics. On the other hand, offshore wind suffers from a high levelized cost of energy (LCOE) and, in particular, high balance of system (BoS) costs owing to accessibility challenges and limited project experience. To address these challenges associated with offshore wind, Sandia National Laboratories is researching large-scale (MW class) offshore floating vertical axis wind turbines (VAWTs). The motivation for this work is that floating VAWTs are a potential transformative technology solution to reduce offshore wind LCOE in deep-water locations. This paper explores performance and cost trade-offs within the design space for floating VAWTs between the configurations for the rotor and platform.
This report summarizes FY16 progress towards enabling uncertainty quantification for compressible cavity simulations using model order reduction (MOR). The targeted application is the quantification of the captive-carry environment for the design and qualification of nuclear weapons systems. To accurately simulate this scenario, Large Eddy Simulations (LES) require very fine meshes and long run times, which lead to week-long runs even on parallel state-of-the-art supercomputers. MOR can reduce substantially the CPU-time requirement for these simulations. We describe two approaches for model order reduction for nonlinear systems, which can yield significant speed-ups when combined with hyper-reduction: the Proper Orthogonal Decomposition (POD)/Galerkin approach and the POD/Least-Squares Petrov-Galerkin (LSPG) approach. The implementation of these methods within the in-house compressible flow solver SPARC is discussed. Next, a method for stabilizing and enhancing low-dimensional reduced bases that was developed as a part of this project is detailed. This approach is based on a premise termed "minimal subspace rotation", and has the advantage of yielding ROMs that are more stable and accurate for long-time compressible cavity simulations. Numerical results for some laminar cavity problems aimed at gauging the viability of the proposed model reduction methodologies are presented and discussed.
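The POD step shared by both approaches can be sketched compactly: collect solution snapshots, take a thin SVD, and retain the leading left singular vectors as a reduced basis onto which the full-order operator is projected. The sketch below uses a small synthetic linear system as a stand-in; SPARC's nonlinear compressible-flow implementation with hyper-reduction is far more involved.

```python
import numpy as np

rng = np.random.default_rng(1)

# Snapshot matrix: columns are full-order states u(t_j) of dimension n,
# generated here by forward-Euler steps of a stable linear system du/dt = A u.
n, m, k = 200, 50, 5                        # state size, snapshots, basis size
A = -np.diag(np.linspace(1.0, 10.0, n))     # synthetic full-order operator
u0 = rng.normal(size=n)
step = np.eye(n) + 0.01 * A                 # one forward-Euler step
snapshots = np.column_stack(
    [np.linalg.matrix_power(step, j) @ u0 for j in range(m)])

# POD: thin SVD of the snapshot matrix; keep the k dominant modes.
Phi, s, _ = np.linalg.svd(snapshots, full_matrices=False)
Phi = Phi[:, :k]                            # reduced basis, Phi^T Phi = I

# Galerkin projection: reduced operator acting on coordinates q, with u ≈ Phi q.
A_r = Phi.T @ A @ Phi                       # k x k system instead of n x n
```

LSPG differs from this plain Galerkin projection by minimizing the discretized residual in a least-squares sense at each time step, which is what gives it its robustness for the compressible problems targeted here.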
Simulations of the flow past a rectangular cavity containing a model captive store are performed using a hybrid Reynolds-averaged Navier–Stokes/large-eddy simulation model. Calculated pressure fluctuation spectra are validated using measurements made on the same configuration in a trisonic wind tunnel at Mach numbers of 0.60, 0.80, and 1.47. The simulation results are used to calculate unsteady integrated forces and moments acting on the store. Spectra of the forces and moments, along with correlations calculated for force/moment pairs, reveal that a complex relationship exists between the unsteady integrated forces and the measured resonant cavity modes, as indicated in the cavity wall pressure measurements. The structure of identified cavity resonant tones is examined by visualization of filtered surface pressure fields.
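Pressure-fluctuation spectra of the kind used in this validation are typically estimated by segment-averaged periodograms (Welch's method). A minimal numpy-only sketch on a synthetic signal, with a hypothetical 100 Hz tone standing in for a cavity resonance; the study's actual sampling rates and windowing choices are not reproduced here:

```python
import numpy as np

def welch_psd(x, fs, nperseg=256):
    """Segment-averaged one-sided power spectral density (Welch's method
    with a Hann window and 50% segment overlap)."""
    win = np.hanning(nperseg)
    step = nperseg // 2
    segs = [x[i:i + nperseg] for i in range(0, len(x) - nperseg + 1, step)]
    scale = fs * np.sum(win**2)
    psd = np.zeros(nperseg // 2 + 1)
    for s in segs:
        X = np.fft.rfft((s - s.mean()) * win)   # detrend, window, transform
        psd += np.abs(X) ** 2 / scale
    psd /= len(segs)
    psd[1:-1] *= 2.0                            # fold in negative frequencies
    f = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    return f, psd

# Example: a 100 Hz "resonant tone" buried in broadband noise.
fs, T = 2000.0, 10.0
rng = np.random.default_rng(0)
t = np.arange(int(fs * T)) / fs
p = np.sin(2.0 * np.pi * 100.0 * t) + 0.5 * rng.normal(size=t.size)
f, psd = welch_psd(p, fs)
f_peak = f[np.argmax(psd)]                      # near the 100 Hz tone
```

Identifying such spectral peaks in the computed and measured wall-pressure signals, and comparing their frequencies and levels, is the essence of the tonal validation described above.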