Some existing approaches to modelling the thermodynamics of moist air make approximations that break thermodynamic consistency, such that the resulting thermodynamics does not obey the first and second laws or has other inconsistencies. Recently, an approach to avoid such inconsistency has been suggested: the use of thermodynamic potentials in terms of their natural variables, from which all thermodynamic quantities and relationships (equations of state) are derived. In this article, we develop this approach for unapproximated moist-air thermodynamics and two widely used approximations: the constant-κ approximation and the dry heat capacities approximation. The (consistent) constant-κ approximation is particularly attractive because, with the appropriate choice of thermodynamic variable, it leads to adiabatic dynamics that depend only on total mass and are independent of the breakdown between water forms. Additionally, a wide variety of material from different sources in the literature on thermodynamics in atmospheric modelling is brought together. It is hoped that this article provides a comprehensive reference for the use of thermodynamic potentials in atmospheric modelling, especially for the three systems considered here.
We present a new evaluation framework for implicit-explicit (IMEX) Runge-Kutta time-stepping schemes. The new framework uses a linearized nonhydrostatic system of normal modes. We utilize the framework to investigate the stability of IMEX methods and their dispersion and dissipation of gravity, Rossby, and acoustic waves. We test the new framework on a variety of IMEX schemes and use it to develop and analyze a set of second-order low-storage IMEX Runge-Kutta methods with a high Courant-Friedrichs-Lewy (CFL) number. We show that the new framework is more selective than the 2-D acoustic system previously used in the literature: schemes that are stable for the 2-D acoustic system are not stable for the system of normal modes.
We present an effort to port the nonhydrostatic atmosphere dynamical core of the Energy Exascale Earth System Model (E3SM) to efficiently run on a variety of architectures, including conventional CPU, many-core CPU, and GPU. We specifically target cloud-resolving resolutions of 3 km and 1 km. To express on-node parallelism we use the C++ library Kokkos, which allows us to achieve a performance portable code in a largely architecture-independent way. Our C++ implementation is at least as fast as the original Fortran implementation on IBM Power9 and Intel Knights Landing processors, proving that the code refactor did not compromise the efficiency on CPU architectures. When using GPUs, our implementation achieves 0.97 Simulated Years Per Day running on the full Summit supercomputer. To the best of our knowledge, this is the highest throughput achieved to date by any global atmosphere dynamical core running at such resolutions.
We present a Fourier analysis of wave propagation problems subject to a class of continuous and discontinuous discretizations using high-degree Lagrange polynomials. This allows us to obtain explicit analytical formulas for the dispersion relation and group velocity and, for the first time to our knowledge, characterize analytically the emergence of gaps in the dispersion relation at specific wavenumbers, when they exist, and compute their specific locations. Wave packets with energy at these wavenumbers will fail to propagate correctly, leading to significant numerical dispersion. We also show that the Fourier analysis generates mathematical artifacts, and we explain how to remove them through a branch selection procedure conducted by analysis of eigenvectors and associated reconstructed solutions. The higher frequency eigenmodes, named erratic in this study, are also investigated analytically and numerically.
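The dispersion and group-velocity analysis described above can be illustrated with a much simpler analogue than the high-degree Lagrange discretizations studied in the paper. The sketch below (an assumption for illustration, not the paper's method) performs the classical Fourier (von Neumann) analysis of second-order centered differences for 1-D advection, showing how a numerical dispersion relation and group velocity are derived and how short waves propagate incorrectly:

```python
import numpy as np

# Fourier analysis of u_t + c u_x = 0 discretized with second-order centered
# differences. Substituting a Fourier mode u_j = exp(i k x_j) into the scheme
# gives the numerical dispersion relation omega(k) = c sin(k h) / h, compared
# with the exact relation omega = c k.

def numerical_dispersion(k, c=1.0, h=0.1):
    """Numerical frequency of centered-difference advection for wavenumber k."""
    return c * np.sin(k * h) / h

def group_velocity(k, c=1.0, h=0.1):
    """d(omega)/dk: the speed at which wave packets actually travel."""
    return c * np.cos(k * h)

h = 0.1
# Well-resolved waves (k h << 1) travel at nearly the exact speed c = 1 ...
assert abs(group_velocity(0.01, h=h) - 1.0) < 1e-3
# ... while the shortest resolvable wave (k h = pi) has group velocity -c:
# its wave packets propagate backwards, a purely numerical artifact.
assert abs(group_velocity(np.pi / h, h=h) + 1.0) < 1e-9
```

For the high-order continuous and discontinuous Galerkin discretizations considered in the abstract, the same substitution yields a matrix eigenvalue problem per wavenumber, whose multiple branches give rise to the gaps and erratic modes the analysis characterizes.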
We derive a formulation of the nonhydrostatic equations in spherical geometry with a Lorenz staggered vertical discretization. The combination conserves a discrete energy under exact time integration when coupled with a mimetic horizontal discretization. The formulation is a version of Dubos and Tort (2014, https://doi.org/10.1175/MWR-D-14-00069.1) rewritten in terms of primitive variables. It is valid for terrain-following mass or height coordinates and for both Eulerian and vertically Lagrangian discretizations. The discretization relies on an extension of the Simmons and Burridge (1981, https://doi.org/10.1175/1520-0493(1981)109<0758:AEAAMC>2.0.CO;2) vertical differencing, which we show obeys a discrete derivative product rule. This product rule allows us to simplify the treatment of the vertical transport terms. Energy conservation is obtained via a term-by-term balance in the kinetic, internal, and potential energy budgets, ensuring an energy-consistent discretization up to time truncation error with no spurious sources of energy. We demonstrate convergence with respect to time truncation error in a spectral element code with a horizontally explicit, vertically implicit (HEVI) implicit-explicit time-stepping algorithm.
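Discrete product rules of the kind invoked above can be checked numerically. The sketch below is an illustrative analogue (not the Simmons-Burridge operators themselves): with a one-sided difference and a two-point average on adjacent levels, the identity delta(ab) = avg(a) delta(b) + avg(b) delta(a) holds exactly, mirroring the continuous rule (ab)' = a b' + b a':

```python
import numpy as np

# Discrete product rule on a column of model levels (illustrative sketch).
# Define (delta f)_k = f_{k+1} - f_k  and  (avg f)_k = (f_k + f_{k+1}) / 2.
# Then  delta(a*b) = avg(a)*delta(b) + avg(b)*delta(a)  exactly, which is the
# discrete analogue of the continuous derivative product rule.

def delta(f):
    return f[1:] - f[:-1]

def avg(f):
    return 0.5 * (f[:-1] + f[1:])

rng = np.random.default_rng(0)
a = rng.standard_normal(20)   # arbitrary profiles on 20 model levels
b = rng.standard_normal(20)

lhs = delta(a * b)
rhs = avg(a) * delta(b) + avg(b) * delta(a)
assert np.allclose(lhs, rhs)  # holds to round-off for any a and b
```

It is identities of this form that let vertical transport terms be rearranged so the kinetic, internal, and potential energy budgets balance term by term.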
Hillman, Benjamin H.; Taylor, Mark A.; Hannah, Walter H.; Jones, Christopher G.; Norman, Matt N.; Lee, Jungmin L.; Pressel, Kyle P.; Pritchard, Mike P.; Bader, David C.; Leung, Ruby L.
We present a new method for reducing parallel applications’ communication time by mapping their MPI tasks to processors in a way that lowers the distance messages travel and the amount of congestion in the network. Assuming geometric proximity among the tasks is a good approximation of their communication interdependence, we use a geometric partitioning algorithm to order both the tasks and the processors, assigning task parts to the corresponding processor parts. In this way, interdependent tasks are assigned to “nearby” cores in the network. We also present a number of algorithmic optimizations that exploit specific features of the network or application to further improve the quality of the mapping. We specifically address the case of sparse node allocation, where the nodes assigned to a job are not necessarily located in a contiguous block nor within close proximity to each other in the network. However, our methods generalize to contiguous allocations as well, and results are shown for both contiguous and non-contiguous allocations. We show that, for the structured finite difference mini-application MiniGhost, our mapping methods reduced communication time up to 75% relative to MiniGhost’s default mapping on 128K cores of a Cray XK7 with sparse allocation. For the atmospheric modeling code E3SM/HOMME, our methods reduced communication time up to 31% on 16K cores of an IBM BlueGene/Q with contiguous allocation.
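The core idea of ordering tasks and processors with the same geometric partitioner and pairing them can be sketched compactly. The example below is an illustrative simplification (a recursive coordinate bisection ordering on hypothetical 2-D coordinates, not the paper's implementation):

```python
import numpy as np

# Geometry-based task mapping sketch: recursively bisect a point set along its
# longest axis to produce a locality-preserving ordering; apply the same
# ordering to task coordinates and to core coordinates; map the i-th task to
# the i-th core. Geometrically nearby tasks then land on nearby cores.

def rcb_order(points, idx=None):
    """Return indices of `points` in recursive-coordinate-bisection order."""
    if idx is None:
        idx = np.arange(len(points))
    if len(idx) <= 1:
        return list(idx)
    pts = points[idx]
    axis = np.argmax(pts.max(axis=0) - pts.min(axis=0))   # split longest extent
    order = idx[np.argsort(pts[:, axis], kind="stable")]
    half = len(order) // 2
    return rcb_order(points, order[:half]) + rcb_order(points, order[half:])

# Hypothetical example: 8 tasks along a line, 8 cores on a 4x2 node grid.
tasks = np.array([[float(i), 0.0] for i in range(8)])
cores = np.array([[x, y] for y in range(2) for x in range(4)], dtype=float)

mapping = dict(zip(rcb_order(tasks), rcb_order(cores)))   # task -> core
assert sorted(int(v) for v in mapping.values()) == list(range(8))
```

Because both orderings come from the same bisection tree, tasks in the same geometric half are assigned cores in the same half of the machine, which is what reduces message distance and congestion.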
We present an architecture-portable and performant implementation of the atmospheric dynamical core (High-Order Methods Modeling Environment, HOMME) of the Energy Exascale Earth System Model (E3SM). The original Fortran implementation is highly performant and scalable on conventional architectures using the Message Passing Interface (MPI) and Open Multi-Processing (OpenMP) programming models. We rewrite the model in C++ and use the Kokkos library to express on-node parallelism in a largely architecture-independent implementation. Kokkos provides an abstraction of a compute node or device, layout-polymorphic multidimensional arrays, and parallel execution constructs. The new implementation achieves the same or better performance on conventional multicore computers and is portable to GPUs. We present performance data for the original and new implementations on multiple platforms, on up to 5400 compute nodes, and study several aspects of the single- and multi-node performance characteristics of the new implementation on conventional CPU (e.g., Intel Xeon), many-core CPU (e.g., Intel Xeon Phi Knights Landing), and Nvidia V100 GPU.
Atmospheric tracer transport is a computationally demanding component of the atmospheric dynamical core of weather and climate simulations. Simulations typically have tens to hundreds of tracers. A tracer field is required to preserve several properties, including mass, shape, and tracer consistency. To improve computational efficiency, it is common to apply different spatial and temporal discretizations to the tracer transport equations than to the dynamical equations. Using different discretizations increases the difficulty of preserving properties. This paper provides a unified framework to analyze the property preservation problem and classes of algorithms to solve it. We examine the primary problem and a safety problem; describe three classes of algorithms to solve these; introduce new algorithms in two of these classes; make connections among the algorithms; analyze each algorithm in terms of correctness, bound on its solution magnitude, and its communication efficiency; and study numerical results. A new algorithm, QLT, has the smallest communication volume, and in an important case it redistributes mass approximately locally. These algorithms are only very loosely coupled to the underlying discretizations of the dynamical and tracer transport equations and thus are broadly and efficiently applicable. In addition, they may be applied to remap problems in applications other than tracer transport.
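A minimal version of the property-preservation problem described above can be made concrete. The sketch below is a generic, simplified fixer (not the paper's QLT algorithm or any of its three algorithm classes): after transport produces a field that violates its bounds or its mass budget, clip to the bounds, then spread the remaining mass surplus or deficit over cells in proportion to their remaining capacity:

```python
import numpy as np

# Restore total tracer mass while keeping every cell within [lo, hi].
# This is an illustrative global redistribution; the paper's algorithms solve
# the same feasibility problem with far less communication.

def restore_mass(rho, lo, hi, target_mass, area):
    """Return a field within [lo, hi] whose area-weighted sum is target_mass."""
    y = np.clip(rho, lo, hi)
    deficit = target_mass - np.sum(y * area)          # mass still to be placed
    if deficit == 0.0:
        return y
    # Capacity: room below the upper bound (if adding) or above the lower
    # bound (if removing), weighted by cell area.
    cap = (hi - y) * area if deficit > 0 else (y - lo) * area
    total_cap = np.sum(cap)
    assert abs(deficit) <= total_cap, "bounds leave no feasible solution"
    return y + np.sign(deficit) * (cap / area) * (abs(deficit) / total_cap)

area = np.array([1.0, 1.0, 2.0, 2.0])
lo, hi = np.zeros(4), np.ones(4)
rho = np.array([1.2, 0.5, 0.4, -0.1])                 # out of bounds after transport
y = restore_mass(rho, lo, hi, target_mass=2.0, area=area)
assert np.all(y >= lo) and np.all(y <= hi)            # shape preserved
assert np.isclose(np.sum(y * area), 2.0)              # mass preserved
```

The interesting algorithmic questions in the paper concern doing this with bounded communication volume and approximately local redistribution, which this global sketch deliberately ignores.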
Herrington, Adam R.; Lauritzen, Peter H.; Taylor, Mark A.; Goldhaber, Steve; Eaton, Brian E.; Bacmeister, Julio T.; Reed, Kevin A.; Ullrich, Paul A.
Atmospheric modeling with element-based high-order Galerkin methods presents a unique challenge to the conventional physics-dynamics coupling paradigm, due to the highly irregular distribution of nodes within an element and the distinct numerical characteristics of the Galerkin method. The conventional coupling procedure is to evaluate the physical parameterizations (physics) on the dynamical core grid. Evaluating the physics at the nodal points exacerbates numerical noise from the Galerkin method, enabling and amplifying local extrema at element boundaries. Grid imprinting may be substantially reduced through the introduction of an entirely separate, approximately isotropic finite-volume grid for evaluating the physics forcing. Integration of the spectral basis over the control volumes provides an area-average state to the physics, which is more representative of the state in the vicinity of the nodal points than the nodal points themselves and is more consistent with the notion of a ''large-scale state'' required by conventional physics packages. This study documents the implementation of a quasi-equal-area physics grid in NCAR's Community Atmosphere Model Spectral Element, which is shown to be effective at mitigating grid imprinting in the solution. The physics grid is also appropriate for coupling to other components within the Community Earth System Model, since the coupler requires component fluxes to be defined on a finite-volume grid, and one can be certain that the fluxes on the physics grid are, indeed, volume averaged.
A set of algorithms based on characteristic discontinuous Galerkin methods is presented for tracer transport on the sphere. The algorithms are designed to reduce message passing interface communication volume per unit of simulated time relative to current methods generally, and to the spectral element scheme employed by the U.S. Department of Energy's Exascale Earth System Model (E3SM) specifically. Two methods are developed to enforce discrete mass conservation when the transport schemes are coupled to a separate dynamics solver: constrained transport and Jacobian-combined transport. A communication-efficient method is introduced to enforce tracer consistency between the transport scheme and dynamics solver; this method also provides the transport scheme's shape preservation capability. A subset of the algorithms derived here is implemented in E3SM and shown to improve transport performance by a factor of 2.2 for the model's standard configuration with 40 tracers at the strong scaling limit of one element per core.
Concern over Arctic methane (CH4) emissions has increased following recent discoveries of poorly understood sources and predictions that methane emissions from known sources will grow as Arctic temperatures increase. New efforts are required to detect increases and explain sources without being confounded by the multiple sources. Methods for distinguishing different sources are critical. We conducted measurements of atmospheric methane and source tracers and performed baseline global atmospheric modeling to begin assessing the climate impact of changes in atmospheric methane. The goal of this project was to address uncertainties in Arctic methane sources and their potential impact on climate by (1) deploying newly developed trace-gas analyzers for measurements of methane, methane isotopologues, ethane, and other tracers of methane sources in Barrow, AK, (2) characterizing methane sources using high-resolution atmospheric chemical transport models and tracer measurements, and (3) modeling Arctic climate using the state-of-the-art high-resolution Spectral Element Community Atmosphere Model (CAM-SE).
This article discusses the problem of identifying extreme climate events, such as intense storms, within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared: the commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. The Stride Search algorithm is defined independently of the spatial discretization associated with a particular data set. The two algorithms are compared for the application of tropical cyclone detection and are shown to produce similar results for the same set of storm identification criteria. Differences between the two algorithms arise for some storms due to their different definitions of search regions in physical space. The physical space associated with each Stride Search region is constant, regardless of data resolution or latitude, and Stride Search is therefore capable of searching all regions of the globe in the same manner. Stride Search's ability to search high latitudes is demonstrated for the case of polar low detection. The wall clock time required for Stride Search is shown to be smaller than that of a grid point search of the same data, and the relative speedup associated with Stride Search increases as resolution increases.
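The key geometric idea, search regions of fixed physical size rather than a fixed number of grid points, can be sketched briefly. The example below is an illustrative simplification (not the published algorithm): sector centers are spaced a fixed distance apart in latitude, while the longitudinal spacing widens by 1/cos(latitude) toward the poles so each sector covers roughly the same physical area:

```python
import math

# Place search-sector centers roughly `radius_deg` of great-circle arc apart,
# independent of the underlying data grid (illustrative Stride Search sketch).

def sector_centers(radius_deg, lat_limit=90.0):
    """Centers (lat, lon) of search sectors spaced ~radius_deg apart."""
    centers = []
    lat = -lat_limit + 0.5 * radius_deg
    while lat < lat_limit:
        # Widen the longitudinal stride near the poles so sector area is
        # roughly constant on the sphere.
        lon_stride = radius_deg / max(math.cos(math.radians(lat)), 1e-6)
        n_lon = max(1, int(math.ceil(360.0 / lon_stride)))
        for i in range(n_lon):
            centers.append((lat, i * 360.0 / n_lon))
        lat += radius_deg
    return centers

centers = sector_centers(5.0)
# Many sectors per latitude circle at the equator, few near the poles:
equator = [c for c in centers if abs(c[0]) < 3.0]
polar = [c for c in centers if c[0] > 85.0]
assert len(equator) > len(polar)
```

This is why the number of search regions, and hence the cost, does not grow with data resolution, and why polar regions are searched the same way as the tropics.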
The Department of Energy’s (DOE) Biological and Environmental Research project, “Water Cycle and Climate Extremes Modeling” is improving our understanding and modeling of regional details of the Earth’s water cycle. Sandia is using high resolution model behavior to investigate storms in the Arctic.
The cubed sphere geometry, obtained by inscribing a cube in a sphere and mapping points between the two surfaces using a gnomonic (central) projection, is commonly used in atmospheric models because it is free of polar singularities and is well-suited for parallel computing. Global meshes on the cubed sphere typically project uniform (square) grids from each face of the cube onto the sphere, and if refinement is desired then it is done with non-conforming meshes: overlaying the area of interest with a finer uniform mesh, which introduces so-called hanging nodes on edges along the boundary of the fine resolution area. An alternate technique is to tile each face of the cube with quadrilaterals without requiring the quads to be rectangular. These meshes allow for refinement in areas of interest with a conforming mesh, providing a smoother transition between high and low resolution portions of the grid than non-conforming refinement. The conforming meshes are demonstrated in HOMME, NCAR's High Order Method Modeling Environment, where two modifications have been made: the dependence on uniform meshes has been removed, and the ability to read arbitrary quadrilateral meshes from a previously generated file has been added. Numerical results come from a conservative spectral element method modeling a selection of the standard shallow water test cases.
The Arctic region is rapidly changing in a way that will affect the rest of the world. Parts of Alaska, western Canada, and Siberia are currently warming at twice the global rate. This warming trend is accelerating permafrost deterioration, coastal erosion, snow and ice loss, and other changes that are a direct consequence of climate change. Climatologists have long understood that changes in the Arctic would be faster and more intense than elsewhere on the planet, but the degree and speed of the changes were underestimated compared to recent observations. Policy makers have not yet had time to examine the latest evidence or appreciate the nature of the consequences. Thus, the abruptness and severity of an unfolding Arctic climate crisis have not been incorporated into long-range planning. The purpose of this report is to briefly review the physical basis for global climate change and Arctic amplification, summarize the ongoing observations, discuss the potential consequences, explain the need for an objective risk assessment, develop scenarios for future change, review existing modeling capabilities and the need for better regional models, and finally to make recommendations for Sandia's future role in preparing our leaders to deal with impacts of Arctic climate change on national security. Accurate and credible regional-scale climate models are still several years in the future, and those models are essential for estimating climate impacts around the globe. This study demonstrates how a scenario-based method may be used to give insights into climate impacts on a regional scale and possible mitigation. Because of our experience in the Arctic and widespread recognition of the Arctic's importance in the Earth climate system, we chose the Arctic as a test case for an assessment of climate impacts on national security.
Sandia can make a swift and significant contribution by applying modeling and simulation tools with internal collaborations as well as with outside organizations. Because changes in the Arctic environment are happening so rapidly, a successful program will be one that can adapt very quickly to new information as it becomes available, and can provide decision makers with projections on the 1-5 year time scale over which the most disruptive, high-consequence changes are likely to occur. The greatest short-term impact would be to initiate exploratory simulations to discover new emergent and robust phenomena associated with one or more of the following changing systems: Arctic hydrological cycle, sea ice extent, ocean and atmospheric circulation, permafrost deterioration, carbon mobilization, Greenland ice sheet stability, and coastal erosion. Sandia can also contribute to new technology solutions for improved observations in the Arctic, which is currently a data-sparse region. Sensitivity analyses have the potential to identify thresholds which would enable the collaborative development of 'early warning' sensor systems to seek predicted phenomena that might be precursory to major, high-consequence changes. Much of this work will require improved regional climate models and advanced computing capabilities. Socio-economic modeling tools can help define human and national security consequences. Formal uncertainty quantification must be an integral part of any results that emerge from this work.
This white paper represents a summary of work intended to lay the foundation for development of a climatological/agent model of climate-induced conflict. The paper combines several loosely-coupled efforts and is the final report for a four-month late-start Laboratory Directed Research and Development (LDRD) project funded by the Advanced Concepts Group (ACG). The project involved contributions by many participants having diverse areas of expertise, with the common goal of learning how to tie together the physical and human causes and consequences of climate change. We performed a review of relevant literature on conflict arising from environmental scarcity. Rather than simply reviewing the previous work, we actively collected data from the referenced sources, reproduced some of the work, and explored alternative models. We used the unfolding crisis in Darfur (western Sudan) as a case study of conflict related to or triggered by climate change, and as an exercise for developing a preliminary concept map. We also outlined a plan for implementing agents in a climate model and defined a logical progression toward the ultimate goal of running both types of models simultaneously in a two-way feedback mode, where the behavior of agents influences the climate and climate change affects the agents. Finally, we offer some ''lessons learned'' in attempting to keep a diverse and geographically dispersed group working together by using Web-based collaborative tools.
The diagonal-mass-matrix spectral element method has proven very successful in geophysical applications dominated by wave propagation. For these problems, the ability to run fully explicit time stepping schemes at relatively high order makes the method more competitive than finite element methods, which require the inversion of a mass matrix. The method relies on Gauss-Lobatto points to be successful, since the grid points used are required to produce well-conditioned polynomial interpolants and to be high-quality 'Gauss-like' quadrature points that exactly integrate a space of polynomials of higher dimension than the number of quadrature points. These two requirements have traditionally limited the diagonal-mass-matrix spectral element method to square or quadrilateral elements, where tensor products of Gauss-Lobatto points can be used. In non-tensor-product domains such as the triangle, both optimal interpolation points and Gauss-like quadrature points are difficult to construct, and there are few analytic results. To extend the diagonal-mass-matrix spectral element method to (for example) triangular elements, one must find appropriate points numerically. One successful approach has been to perform numerical searches for high-quality interpolation points, as measured by the Lebesgue constant (such as minimum-energy electrostatic points and Fekete points). However, these points typically do not have any Gauss-like quadrature properties. In this work, we describe a new numerical method to look for Gauss-like quadrature points in the triangle, based on a previous algorithm for computing Fekete points. Performing a brute force search for such points is extremely difficult. A common strategy to increase the numerical efficiency of these searches is to reduce the number of unknowns by imposing symmetry conditions on the quadrature points.
Motivated by spectral element methods, we propose a different way to reduce the number of unknowns: We look for quadrature formulas that have the same number of points as the number of basis functions used in the spectral element method's transform algorithm. This is an important requirement if they are to be used in a diagonal-mass-matrix spectral element method. This restriction allows for the construction of cardinal functions (Lagrange interpolating polynomials). The ability to construct cardinal functions leads to a remarkable expression relating the variation in the quadrature weights to the variation in the quadrature points. This relation in turn leads to an analytical expression for the gradient of the quadrature error with respect to the quadrature points. Thus the quadrature weights have been completely removed from the optimization problem, and we can implement an exact steepest descent algorithm for driving the quadrature error to zero. Results from the algorithm will be presented for the triangle and the sphere.
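The 'Gauss-like' quadrature property that these searches target is easy to verify in the one-dimensional tensor-product case, where the answer is known in closed form. The sketch below (a standard textbook construction, independent of the paper's triangle algorithm) builds the n-point Gauss-Lobatto-Legendre rule, whose interior nodes are the roots of P'_{n-1} and whose weights are 2 / (n (n-1) P_{n-1}(x_i)^2), and checks that it integrates polynomials exactly up to degree 2n - 3, one degree short of which it fails:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def gll(n):
    """Nodes and weights of the n-point Gauss-Lobatto-Legendre rule on [-1, 1]."""
    P = Legendre.basis(n - 1)                 # Legendre polynomial P_{n-1}
    x = np.concatenate(([-1.0], P.deriv().roots(), [1.0]))
    w = 2.0 / (n * (n - 1) * P(x) ** 2)
    return x, w

x, w = gll(5)
# Exact for all monomials of degree <= 2n - 3 = 7 ...
for d in range(8):
    exact = (1.0 + (-1.0) ** d) / (d + 1)     # integral of x^d over [-1, 1]
    assert np.isclose(np.sum(w * x ** d), exact)
# ... but not for degree 2n - 2 = 8:
assert not np.isclose(np.sum(w * x ** 8), 2.0 / 9.0)
```

In the triangle no such closed form is known, which is why the steepest descent search described above, with the weights eliminated through the cardinal-function relation, is needed.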