Publications

142 Results

Ensemble Grouping Strategies for Embedded Stochastic Collocation Methods Applied to Anisotropic Diffusion Problems

SIAM/ASA Journal on Uncertainty Quantification

D'Elia, Marta D.; Phipps, Eric T.; Edwards, Harold C.; Hu, Jonathan J.; Rajamanickam, Sivasankaran R.

Previous work has demonstrated that propagating groups of samples, called ensembles, together through forward simulations can dramatically reduce the aggregate cost of sampling-based uncertainty propagation methods [E. Phipps, M. D'Elia, H. C. Edwards, M. Hoemmen, J. Hu, and S. Rajamanickam, SIAM J. Sci. Comput., 39 (2017), pp. C162–C193]. However, critical to the success of this approach when applied to challenging problems of scientific interest is the grouping of samples into ensembles to minimize the total computational work. For example, the total number of linear solver iterations for ensemble systems may be strongly influenced by which samples form the ensemble when applying iterative linear solvers to parameterized and stochastic linear systems. In this paper we explore sample grouping strategies for local adaptive stochastic collocation methods applied to PDEs with uncertain input data, in particular canonical anisotropic diffusion problems where the diffusion coefficient is modeled by truncated Karhunen–Loève expansions. Finally, we demonstrate that a measure of the total anisotropy of the diffusion coefficient is a good surrogate for the number of linear solver iterations for each sample and therefore provides a simple and effective metric for grouping samples.
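
The grouping heuristic described in the abstract is simple to illustrate. Below is a minimal, hypothetical C++ sketch (not code from the paper): samples are sorted by a scalar anisotropy surrogate and then chunked into fixed-size ensembles, so that samples expected to need similar linear solver iteration counts are propagated together.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// One sample of the uncertain input; "anisotropy" stands in for the
// paper's surrogate measure of the total anisotropy of the diffusion
// coefficient (the definition of that measure is in the paper, not here).
struct Sample {
  std::vector<double> xi;  // Karhunen-Loeve random variables
  double anisotropy;       // surrogate for linear solver iteration count
};

// Group samples into ensembles of size s: sort by the surrogate, then
// take consecutive chunks, so each ensemble contains samples with
// similar expected solver cost.
std::vector<std::vector<Sample>> group_by_anisotropy(
    std::vector<Sample> samples, std::size_t s) {
  std::sort(samples.begin(), samples.end(),
            [](const Sample& a, const Sample& b) {
              return a.anisotropy < b.anisotropy;
            });
  std::vector<std::vector<Sample>> ensembles;
  for (std::size_t i = 0; i < samples.size(); i += s) {
    const std::size_t end = std::min(i + s, samples.size());
    ensembles.emplace_back(samples.begin() + i, samples.begin() + end);
  }
  return ensembles;
}
```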

Trends in Data Locality Abstractions for HPC Systems

IEEE Transactions on Parallel and Distributed Systems

Unat, Didem; Dubey, Anshu; Hoefler, Torsten; Shalf, John B.; Abraham, Mark; Bianco, Mauro; Chamberlain, Bradford L.; Cledat, Romain; Edwards, Harold C.; Finkel, Hal; Fuerlinger, Karl; Hannig, Frank; Jeannot, Emmanuel; Kamil, Amir; Keasler, Jeff; Kelly, Paul H.J.; Leung, Vitus J.; Ltaief, Hatem; Maruyama, Naoya; Newburn, Chris J.; Pericas, Miquel

Kokkos' Task DAG Capabilities

Edwards, Harold C.; Ibanez-Granados, Daniel A.

This report documents the ASC/ATDM Kokkos deliverable "Production Portable Dynamic Task DAG Capability." This capability enables applications to create and execute a dynamic task DAG: a collection of heterogeneous computational tasks with a directed acyclic graph (DAG) of "execute after" dependencies, where tasks and their dependencies are dynamically created and destroyed as tasks execute. The Kokkos task scheduler executes the dynamic task DAG on the target execution resource, e.g., a multicore CPU, a manycore CPU such as Intel's Knights Landing (KNL), or an NVIDIA GPU. Several major technical challenges had to be addressed during development of Kokkos' task DAG capability: (1) portability to a GPU with its simplified hardware and micro-runtime, (2) thread-scalable memory allocation and deallocation from a bounded pool of memory, (3) a thread-scalable scheduler for the dynamic task DAG, and (4) usability by applications.
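
To make the "execute after" semantics concrete, here is a minimal sequential C++ sketch of a task DAG executor (Kahn's algorithm over dependency counts). It illustrates only the abstract execution model described above; it is not the Kokkos scheduler API, and all names are hypothetical.

```cpp
#include <cstddef>
#include <functional>
#include <queue>
#include <vector>

// A task runs only after all of its "execute after" predecessors finish.
struct Task {
  std::function<void()> work;
  std::vector<std::size_t> successors;  // tasks that depend on this one
  int unmet_deps = 0;                   // predecessors not yet executed
};

// Sequential stand-in for a task scheduler: repeatedly run any task whose
// dependencies are satisfied, then release its successors.
void execute_dag(std::vector<Task>& tasks) {
  std::queue<std::size_t> ready;
  for (std::size_t i = 0; i < tasks.size(); ++i)
    if (tasks[i].unmet_deps == 0) ready.push(i);
  while (!ready.empty()) {
    const std::size_t t = ready.front();
    ready.pop();
    tasks[t].work();
    for (std::size_t s : tasks[t].successors)
      if (--tasks[s].unmet_deps == 0) ready.push(s);
  }
}
```

A production scheduler like Kokkos' must additionally make the ready queue and the dependency updates thread-scalable, and must allow tasks to spawn new tasks and dependencies while the DAG is executing, which are the challenges enumerated above.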

Embedded ensemble propagation for improving performance, portability, and scalability of uncertainty quantification on emerging computational architectures

SIAM Journal on Scientific Computing

Phipps, Eric T.; D'Elia, Marta D.; Edwards, Harold C.; Hoemmen, M.; Hu, J.; Rajamanickam, Sivasankaran R.

Quantifying simulation uncertainties is a critical component of rigorous predictive simulation. A key step is the forward propagation of uncertainties in simulation input data to output quantities of interest. Typical approaches involve repeated sampling of the simulation over the uncertain input data and can require numerous samples when accurately propagating uncertainties from large numbers of sources. Often simulation processes are similar from sample to sample, and much of the data generated by each sample evaluation could be reused. We explore a new method for implementing sampling methods that simultaneously propagates groups of samples together in an embedded fashion, which we call embedded ensemble propagation. We show how this approach takes advantage of properties of modern computer architectures to improve performance by enabling reuse between samples, reducing memory bandwidth requirements, improving memory access patterns, improving opportunities for fine-grained parallelization, and reducing communication costs. We describe a software technique for implementing embedded ensemble propagation based on the use of C++ templates, and describe its integration with various scientific computing libraries within Trilinos. We demonstrate improved performance, portability, and scalability for the approach applied to the simulation of partial differential equations on a variety of multicore and manycore architectures, including up to 16,384 cores on a Cray XK7 (Titan).
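
The C++ template technique at the heart of the approach can be sketched briefly. Below is a minimal, hypothetical ensemble scalar type, in the spirit of the Stokhos/Trilinos implementation but not its actual API: arithmetic on one `Ensemble` value applies the operation to all N samples at once, so simulation code templated on its scalar type propagates a whole ensemble in a single pass.

```cpp
#include <cstddef>

// A fixed-size ensemble of N sample values behaving like one scalar.
// Operator overloads apply each operation across the whole ensemble,
// giving the compiler contiguous, vectorizable inner loops.
template <typename T, std::size_t N>
struct Ensemble {
  T v[N];

  Ensemble& operator+=(const Ensemble& o) {
    for (std::size_t i = 0; i < N; ++i) v[i] += o.v[i];
    return *this;
  }
  friend Ensemble operator*(const Ensemble& a, const Ensemble& b) {
    Ensemble r;
    for (std::size_t i = 0; i < N; ++i) r.v[i] = a.v[i] * b.v[i];
    return r;
  }
};

// Kernels templated on the scalar type run unchanged for double
// (one sample) or Ensemble<double, 16> (16 samples at once).
template <typename Scalar>
Scalar axpy(Scalar a, Scalar x, Scalar y) {
  Scalar r = a * x;
  r += y;
  return r;
}
```

Because the ensemble values for each degree of freedom sit contiguously in memory, sparse matrix data and communication buffers are reused across the whole ensemble, which is the source of the bandwidth and communication savings claimed above.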

Kokkos/Qthreads task-parallel approach to linear algebra based graph analytics

2016 IEEE High Performance Extreme Computing Conference, HPEC 2016

Wolf, Michael W.; Edwards, Harold C.; Olivier, Stephen L.

The GraphBLAS effort to standardize a set of graph algorithm building blocks in terms of linear algebra primitives promises to deliver high-performing graph algorithms and to greatly impact the analysis of big data. However, there are challenges with this approach, which our data analytics miniapp miniTri exposes. In this paper, we improve upon a previously proposed task-parallel approach to the linear algebra-based miniTri formulation, addressing these challenges and describing a Kokkos/Qthreads task-parallel implementation that performs as well as or slightly better than the highly optimized, baseline OpenMP data-parallel implementation.
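
As a flavor of what "graph algorithms in terms of linear algebra primitives" means, here is a dense C++ sketch of triangle counting via the classic identity trace(A³)/6; this is an illustration of the linear-algebra style, not miniTri's actual sparse formulation.

```cpp
#include <cstddef>
#include <vector>

using Matrix = std::vector<std::vector<int>>;

// Count triangles in an undirected simple graph from its adjacency
// matrix A (0/1, symmetric, zero diagonal). (A*A)[i][j] counts the
// 2-paths i->k->j; masking by A[i][j] keeps only the closed ones, and
// each triangle is counted 6 times (3 vertices x 2 directions).
long count_triangles(const Matrix& A) {
  const std::size_t n = A.size();
  long closed_walks = 0;
  for (std::size_t i = 0; i < n; ++i)
    for (std::size_t j = 0; j < n; ++j) {
      if (!A[i][j]) continue;  // GraphBLAS-style element-wise mask
      long paths = 0;
      for (std::size_t k = 0; k < n; ++k) paths += A[i][k] * A[k][j];
      closed_walks += paths;
    }
  return closed_walks / 6;
}
```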

Hierarchical Task-Data Parallelism using Kokkos and Qthreads

Edwards, Harold C.; Olivier, Stephen L.; Berry, Jonathan W.; Mackey, Greg; Rajamanickam, Sivasankaran R.; Wolf, Michael W.; Kim, Kyungjoo K.; Stelle, George

This report describes a new capability for hierarchical task-data parallelism using Sandia's Kokkos and Qthreads, and an evaluation of this capability with sparse matrix Cholesky factorization and social network triangle enumeration mini-applications. Hierarchical task-data parallelism consists of a collection of tasks with executes-after dependences, where each task contains data parallel operations performed on a team of hardware threads. The collection of tasks and dependences forms a directed acyclic graph of tasks: a task DAG. Major challenges of this research and development effort include: portability and performance across multicore CPU, manycore Intel Xeon Phi, and NVIDIA GPU architectures; scalability with respect to hardware concurrency and size of the task DAG; and usability of the application programmer interface (API).
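
The "data parallel operations performed on a team of hardware threads" half of the hierarchy corresponds to Kokkos' team policies. The following is a minimal example of our own using the public Kokkos hierarchical parallelism API (batched row sums standing in for the data-parallel body of a task); the tasking half sits above this level.

```cpp
#include <Kokkos_Core.hpp>

int main(int argc, char* argv[]) {
  Kokkos::initialize(argc, argv);
  {
    const int nrows = 1024, ncols = 256;
    Kokkos::View<double**> A("A", nrows, ncols);
    Kokkos::View<double*> sums("sums", nrows);

    using team_policy = Kokkos::TeamPolicy<>;
    using member_type = team_policy::member_type;

    // League of teams: one team of threads per row; each team performs
    // a nested data-parallel reduction across that row's columns.
    Kokkos::parallel_for(
        "row_sums", team_policy(nrows, Kokkos::AUTO),
        KOKKOS_LAMBDA(const member_type& team) {
          const int row = team.league_rank();
          double sum = 0.0;
          Kokkos::parallel_reduce(
              Kokkos::TeamThreadRange(team, ncols),
              [=](const int col, double& lsum) { lsum += A(row, col); },
              sum);
          // One thread per team commits the team's result.
          Kokkos::single(Kokkos::PerTeam(team), [=]() { sums(row) = sum; });
        });
  }
  Kokkos::finalize();
  return 0;
}
```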

Task Parallel Incomplete Cholesky Factorization using 2D Partitioned-Block Layout

Kim, Kyungjoo K.; Rajamanickam, Sivasankaran R.; Stelle, George; Edwards, Harold C.; Olivier, Stephen L.

We introduce a task-parallel algorithm for sparse incomplete Cholesky factorization that utilizes a 2D sparse partitioned-block layout of a matrix. Our factorization algorithm follows the idea of algorithms-by-blocks by using the block layout. The algorithm-by-blocks approach induces a task graph for the factorization, whose tasks are related to each other through their data dependences in the factorization algorithm. To process the tasks on various manycore architectures in a portable manner, we also present a portable tasking API that incorporates different tasking backends and device-specific features using an open-source framework for manycore platforms, i.e., Kokkos. A performance evaluation is presented on both Intel Sandy Bridge and Xeon Phi platforms for matrices from the University of Florida sparse matrix collection to illustrate the merits of the proposed task-based factorization. Experimental results demonstrate that our task-parallel implementation delivers about a 26.6x speedup (geometric mean) over single-threaded incomplete Cholesky-by-blocks, and a 19.2x speedup over a serial Cholesky implementation that does not carry tasking overhead, using 56 threads on the Intel Xeon Phi processor for sparse matrices arising from various application problems.
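
The task structure induced by algorithms-by-blocks is easiest to see in the dense case. Below is a sequential C++ sketch of right-looking Cholesky over an nb-by-nb grid of blocks, with hypothetical per-block kernel names; the paper's algorithm is the sparse, incomplete analogue of this loop nest executed as a task DAG.

```cpp
// Hypothetical per-block kernels (declared only; think dense
// LAPACK/BLAS analogues applied to individual blocks):
void potrf(int k);                // factor diagonal block (k,k)
void trsm(int k, int i);          // triangular solve for block (i,k)
void update(int k, int i, int j); // rank update of trailing block (i,j)

// Right-looking Cholesky over an nb x nb grid of blocks. Each kernel
// call below is one task in the induced DAG, runnable as soon as the
// tasks that wrote its input blocks have completed.
void cholesky_by_blocks(int nb) {
  for (int k = 0; k < nb; ++k) {
    potrf(k);                     // depends on prior updates to (k,k)
    for (int i = k + 1; i < nb; ++i)
      trsm(k, i);                 // depends on potrf(k)
    for (int i = k + 1; i < nb; ++i)
      for (int j = k + 1; j <= i; ++j)
        update(k, i, j);          // syrk (i==j) or gemm; depends on
                                  // trsm(k,i) and trsm(k,j)
  }
}
```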

ASC Trilab L2 Codesign Milestone 2015

Trott, Christian R.; Hammond, Simon D.; Dinge, Dennis D.; Lin, Paul L.; Vaughan, Courtenay T.; Cook, Jeanine C.; Rajan, Mahesh R.; Edwards, Harold C.; Hoekstra, Robert J.

For the FY15 ASC L2 Trilab Codesign milestone, Sandia National Laboratories performed two main studies. The first study investigated three topics (performance, cross-platform portability, and programmer productivity) when using OpenMP directives and the RAJA and Kokkos programming models, available from LLNL and SNL respectively. The focus of this first study was the LULESH mini-application developed and maintained by LLNL. In the coming sections of the report the reader will find performance comparisons (and a demonstration of portability) for a variety of mini-application implementations produced during this study with varying levels of optimization. Of note is that the implementations utilized optimizations across a number of programming models, to help ensure that claims that Kokkos can provide native-class application performance are valid. The second study performed during FY15 is a performance assessment of the MiniAero mini-application developed by Sandia. This mini-application was developed by the SIERRA Thermal-Fluid team at Sandia for the purpose of learning the Kokkos programming model, and so is available in only a single implementation. For this report we studied its performance and scaling on a number of machines, with the intent of providing insight into potential performance issues that may be experienced when similar algorithms are deployed on the forthcoming Trinity ASC ATS platform.

ASC-ATDM Performance Portability Requirements for 2015-2019

Edwards, Harold C.; Trott, Christian R.

This report outlines the research, development, and support requirements for the Advanced Simulation and Computing (ASC) Advanced Technology, Development, and Mitigation (ATDM) Performance Portability (a.k.a. Kokkos) project for 2015-2019. The research and development (R&D) goal for Kokkos (v2) has been to create and demonstrate a thread-parallel programming model and standard C++ library-based implementation that enables performance portability across diverse manycore architectures such as multicore CPU, Intel Xeon Phi, and NVIDIA Kepler GPU. This R&D goal has been achieved for algorithms that use data parallel patterns including parallel-for, parallel-reduce, and parallel-scan. Current R&D is focusing on hierarchical parallel patterns such as a directed acyclic graph (DAG) of asynchronous tasks where each task contains nested data parallel algorithms. This five-year plan includes the R&D required to fully and performance-portably exploit thread parallelism across current and anticipated next generation platforms (NGP). The Kokkos library is being evaluated by many projects exploring algorithms and code design for NGP. Some production libraries and applications, such as Trilinos and LAMMPS, have already committed to Kokkos as their foundation for manycore parallelism and performance portability. These five-year requirements include the support required for current and anticipated ASC projects to be effective and productive in their use of Kokkos on NGP. The greatest risk to the success of Kokkos and the ASC projects relying upon it is a lack of staffing resources to support Kokkos to the degree needed by these projects. This support includes up-to-date tutorials, documentation, multi-platform (hardware and software stack) testing, minor feature enhancements, thread-scalable algorithm consulting, and managing collaborative R&D.
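
The data parallel patterns named above are the core of the Kokkos (v2) programming model. A minimal dot-product example of our own using the public API (parallel-reduce over a 1D index range); the same source compiles for the CPU and GPU backends.

```cpp
#include <Kokkos_Core.hpp>
#include <cstdio>

int main(int argc, char* argv[]) {
  Kokkos::initialize(argc, argv);
  {
    const int n = 1 << 20;
    // Views are allocated in the default execution space's memory.
    Kokkos::View<double*> x("x", n), y("y", n);
    Kokkos::deep_copy(x, 1.0);
    Kokkos::deep_copy(y, 2.0);

    // parallel-reduce: each index contributes to a thread-local sum,
    // and Kokkos combines the partial sums portably across backends.
    double dot = 0.0;
    Kokkos::parallel_reduce(
        "dot", n,
        KOKKOS_LAMBDA(const int i, double& sum) { sum += x(i) * y(i); },
        dot);
    std::printf("dot = %g\n", dot);
  }
  Kokkos::finalize();
  return 0;
}
```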

Programming Abstractions for Data Locality

Tate, Adrian T.; Kamil, Amir K.; Dubey, Anshu D.; Groblinger, Armin G.; Chamberlain, Brad C.; Goglin, Brice G.; Edwards, Harold C.; Newburn, Chris J.; Padua, David A.; Unat, Didem U.; Jeannot, Emmanuel J.; Hannig, Frank H.; Gysi, Tobias G.; Ltaief, Hatem L.; Sexton, James S.; Labarta, Jesus L.; Shalf, John S.; Fuerlinger, Karl F.; O'Brien, Kathryn O.; Linardakis, Leonidas L.; Besta, Maciej B.; Sawley, Marie-Christine S.; Abraham, Mark A.; Bianco, Mauro B.; Pericas, Miquel P.; Maruyama, Naoya M.; Kelly, Paul H.; Messmer, Peter M.; Ross, Robert B.; Cledat, Romain C.; Matsuoka, Satoshi M.; Schulthess, Thomas S.; Hoefler, Torsten H.; Leung, Vitus J.

The goal of the workshop and this report is to identify common themes and standardize concepts for locality-preserving abstractions for exascale programming models.

Exploring Heterogeneous Multicore Architectures for Advanced Embedded Uncertainty Quantification

Phipps, Eric T.; Edwards, Harold C.; Hu, Jonathan J.

We explore rearrangements of classical uncertainty quantification methods with the aim of achieving higher aggregate performance for uncertainty quantification calculations on emerging multicore and manycore architectures. We show that a rearrangement of the stochastic Galerkin method leads to improved performance and scalability on several computational architectures, whereby uncertainty information is propagated at the lowest levels of the simulation code, improving memory access patterns, exposing new dimensions of fine-grained parallelism, and reducing communication. We also develop a general framework for implementing such rearrangements for a diverse set of uncertainty quantification algorithms as well as the computational simulation codes to which they are applied.
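
The low-level kernel of an embedded stochastic Galerkin rearrangement is the pseudo-spectral product of two polynomial chaos expansions. A dense C++ sketch of our own, assuming a precomputed normalized triple-product tensor (real implementations exploit its sparsity):

```cpp
#include <cstddef>
#include <vector>

// Multiply two polynomial chaos expansions a and b of length P:
//   c_k = sum_{i,j} C(i,j,k) * a_i * b_j,
// where C(i,j,k) = <psi_i psi_j psi_k> / <psi_k psi_k> is a precomputed
// (and in practice sparse) triple-product tensor of the basis.
// Propagating this operation at the lowest level of the simulation code
// is what exposes the extra fine-grained parallelism discussed above.
std::vector<double> pce_multiply(
    const std::vector<double>& a, const std::vector<double>& b,
    const std::vector<std::vector<std::vector<double>>>& C) {
  const std::size_t P = a.size();
  std::vector<double> c(P, 0.0);
  for (std::size_t k = 0; k < P; ++k)
    for (std::size_t i = 0; i < P; ++i)
      for (std::size_t j = 0; j < P; ++j)
        c[k] += C[i][j][k] * a[i] * b[j];
  return c;
}
```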

NEAMS Nuclear Waste Management IPSC : evaluation and selection of tools for the quality environment

Vigil, Dena V.; Edwards, Harold C.; Bouchard, Julie F.; Stubblefield, W.A.

The objective of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation Nuclear Waste Management Integrated Performance and Safety Codes (NEAMS Nuclear Waste Management IPSC) program element is to provide an integrated suite of computational modeling and simulation (M&S) capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive-waste storage facility or disposal repository. This objective will be fulfilled by acquiring and developing M&S capabilities and establishing a defensible level of confidence in them; the foundation for assessing that level of confidence is the rigor and results of verification, validation, and uncertainty quantification (V&V and UQ) activities. M&S capabilities are to be managed, verified, and validated within the NEAMS Nuclear Waste Management IPSC quality environment, and M&S capabilities and the supporting analysis workflow and simulation data management tools will be distributed to end users from this same quality environment. The same analysis workflow and simulation data management tools that are distributed to end users will be used for V&V activities within the quality environment; this strategic decision reduces the number of tools to be supported and increases the quality of the tools distributed to end users due to their rigorous use by V&V activities. V&V and UQ practices and evidence management goals are documented in the V&V Plan, which includes a description of the quality environment into which M&S capabilities are imported and within which V&V and UQ activities are managed. The first phase of implementing the V&V Plan is to deploy an initial quality environment through the acquisition and integration of a set of software tools. This report documents an evaluation of the needs, options, and tools selected for the NEAMS Nuclear Waste Management IPSC quality environment.

Nuclear Energy Advanced Modeling and Simulation (NEAMS) Waste Integrated Performance and Safety Codes (IPSC) : FY10 development and integration

Freeze, Geoffrey A.; Arguello, Jose G.; Bouchard, Julie F.; Criscenti, Louise C.; Dewers, Thomas D.; Edwards, Harold C.; Sassani, David C.; Schultz, Peter A.; Wang, Yifeng

This report describes the progress in fiscal year 2010 in developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with robust verification, validation, and software quality requirements. Waste IPSC activities in fiscal year 2010 focused on specifying a challenge problem to demonstrate proof of concept, developing a verification and validation plan, and performing an initial gap analyses to identify candidate codes and tools to support the development and integration of the Waste IPSC. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. This year-end progress report documents the FY10 status of acquisition, development, and integration of thermal-hydrologic-chemical-mechanical (THCM) code capabilities, frameworks, and enabling tools and infrastructure.

Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) verification and validation plan. version 1

Edwards, Harold C.; Arguello, Jose G.; Bartlett, Roscoe B.; Bouchard, Julie F.; Freeze, Geoffrey A.; Knupp, Patrick K.; Schultz, Peter A.; Urbina, Angel U.; Wang, Yifeng

The objective of the U.S. Department of Energy Office of Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) is to provide an integrated suite of computational modeling and simulation (M&S) capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive-waste storage facility or disposal repository. To meet this objective, NEAMS Waste IPSC M&S capabilities will be applied to challenging spatial domains, temporal domains, multiphysics couplings, and multiscale couplings. A strategic verification and validation (V&V) goal is to establish evidence-based metrics for the level of confidence in M&S codes and capabilities. Because it is economically impractical to apply the maximum V&V rigor to each and every M&S capability, M&S capabilities will be ranked for their impact on the performance assessments of various components of the repository systems. Those M&S capabilities with greater impact will require a greater level of confidence and a correspondingly greater investment in V&V. This report includes five major components: (1) a background summary of the NEAMS Waste IPSC to emphasize M&S challenges; (2) the conceptual foundation for verification, validation, and confidence assessment of NEAMS Waste IPSC M&S capabilities; (3) specifications for the planned verification, validation, and confidence-assessment practices; (4) specifications for the planned evidence information management system; and (5) a path forward for the incremental implementation of this V&V plan.

Improving performance via mini-applications

Doerfler, Douglas W.; Crozier, Paul C.; Edwards, Harold C.; Williams, Alan B.; Rajan, Mahesh R.; Keiter, Eric R.; Thornquist, Heidi K.

Application performance is determined by a combination of many choices: hardware platform, runtime environment, languages and compilers used, algorithm choice and implementation, and more. In this complicated environment, we find that the use of mini-applications - small self-contained proxies for real applications - is an excellent approach for rapidly exploring the parameter space of all these choices. Furthermore, use of mini-applications enriches the interaction between application, library and computer system developers by providing explicit functioning software and concrete performance results that lead to detailed, focused discussions of design trade-offs, algorithm choices and runtime performance issues. In this paper we discuss a collection of mini-applications and demonstrate how we use them to analyze and improve application performance on new and future computer platforms.

Two-way coupling of Presto v2.8 and CTH v8.1

Edwards, Harold C.; Crawford, D.A.; Bishop, Joseph E.

A loose two-way coupling of SNL's Presto v2.8 and CTH v8.1 analysis codes has been developed to support the analysis of explosive loading of structures. Presto is a Lagrangian, three-dimensional, explicit, transient dynamics code in the SIERRA mechanics suite for the analysis of structures subjected to impact-like loads. CTH is a hydrocode for modeling complex multi-dimensional, multi-material problems that are characterized by large deformations and/or strong shocks. A fundamental assumption in this loose coupling is that the compliance of the structure modeled with Presto is significantly smaller than the compliance of the surrounding medium (e.g., air) modeled with CTH. A current limitation of the coupled code is that the interaction between CTH and thin structures modeled in Presto (e.g., shells) is not supported. Research is in progress to relax this thin-structure limitation.
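
Loose (staggered) two-way coupling of this kind typically alternates single solver steps with boundary data exchange. A schematic C++ sketch of one such loop follows, with hypothetical stub types and interfaces; the actual Presto/CTH exchange mechanism is not documented here.

```cpp
// Hypothetical stand-ins for the two codes and the exchanged surface
// data; names and interfaces are illustrative only, not the real APIs.
struct SurfaceTractions {};  // pressure loads sampled from the CTH grid
struct SurfaceMotion {};     // wetted-surface position/velocity from Presto

struct CthSolver {
  // One explicit Eulerian step against the current structure surface (stub).
  SurfaceTractions advance(double /*dt*/, const SurfaceMotion&) { return {}; }
};
struct PrestoSolver {
  // One explicit Lagrangian step driven by the fluid loads (stub).
  SurfaceMotion advance(double /*dt*/, const SurfaceTractions&) { return {}; }
};

// Loosely coupled time loop: each code advances one step using the
// other code's data from the previous step. This staggering is
// justified when the structure's compliance is much smaller than that
// of the surrounding medium, per the assumption stated above.
void run_coupled(CthSolver& cth, PrestoSolver& presto, double dt, int nsteps) {
  SurfaceMotion motion;  // initial wetted-surface state
  for (int step = 0; step < nsteps; ++step) {
    SurfaceTractions loads = cth.advance(dt, motion);  // fluid step
    motion = presto.advance(dt, loads);                // structure step
  }
}
```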

Sandia National Laboratories Advanced Simulation and Computing (ASC) software quality plan part 2 mappings for the ASC software quality engineering practices, version 2.0

Boucheron, Edward A.; Sturtevant, Judy E.; Drake, Richard R.; Edwards, Harold C.; Forsythe, Christi A.; Heaphy, Robert T.; Hodges, Ann L.; Minana, Molly A.; Pavlakos, Constantine P.; Schofield, Joseph R.

The purpose of the Sandia National Laboratories Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. The plan defines the ASC program software quality practices and provides mappings of these practices to Sandia Corporate Requirements CPR001.3.2 and CPR001.3.6 and to a Department of Energy document, "ASCI Software Quality Engineering: Goals, Principles, and Guidelines". This document also identifies ASC management and software project teams' responsibilities in implementing the software quality practices and in assessing progress towards achieving their software quality goals.

Sandia National Laboratories Advanced Simulation and Computing (ASC) software quality plan. Part 1: ASC software quality engineering practices, Version 2.0

Drake, Richard R.; Sturtevant, Judy E.; Boucheron, Edward A.; Edwards, Harold C.; Minana, Molly A.; Forsythe, Christi A.; Heaphy, Robert T.; Hodges, Ann L.; Pavlakos, Constantine P.; Schofield, Joseph R.

The purpose of the Sandia National Laboratories Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. The plan defines the ASC program software quality practices and provides mappings of these practices to Sandia Corporate Requirements CPR 1.3.2 and 1.3.6 and to a Department of Energy document, "ASCI Software Quality Engineering: Goals, Principles, and Guidelines". This document also identifies ASC management and software project teams' responsibilities in implementing the software quality practices and in assessing progress towards achieving their software quality goals.

Sandia National Laboratories Advanced Simulation and Computing (ASC) software quality plan. Part 1 : ASC software quality engineering practices version 1.0

Boucheron, Edward A.; Schofield, Joseph R.; Drake, Richard R.; Edwards, Harold C.; Minana, Molly A.; Forsythe, Christi A.; Heaphy, Robert T.; Hodges, Ann L.; Pavlakos, Constantine P.; Sturtevant, Judy E.

The purpose of the Sandia National Laboratories (SNL) Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. Quality is defined in DOE/AL Quality Criteria (QC-1) as conformance to customer requirements and expectations. This quality plan defines the ASC program software quality practices and provides mappings of these practices to the SNL Corporate Process Requirements (CPR 1.3.2 and CPR 1.3.6) and the Department of Energy (DOE) document, "ASCI Software Quality Engineering: Goals, Principles, and Guidelines" (GP&G). This quality plan identifies ASC management and software project teams' responsibilities for cost-effective software engineering quality practices. The SNL ASC Software Quality Plan establishes the signatories' commitment to improving software products by applying cost-effective software engineering quality practices. This document explains the project teams' opportunities for tailoring and implementing the practices; enumerates the practices that compose the development of SNL ASC's software products; and includes a sample assessment checklist that was developed based upon the practices in this document.

SIERRA Framework Version 3: h-Adaptivity Design and Use

Stewart, James R.; Edwards, Harold C.

This paper presents a high-level overview of the algorithms and supporting functionality provided by SIERRA Framework Version 3 for h-adaptive finite-element mechanics application development. Also presented is a fairly comprehensive description of what is required by the application codes to use the SIERRA h-adaptivity services. In general, the SIERRA framework provides the functionality for hierarchically subdividing elements in a distributed parallel environment, as well as dynamic load balancing. The mechanics application code is required to supply an a posteriori error indicator, prolongation and restriction operators for the field variables, hanging-node constraint handlers, and execution control code. This paper does not describe the Application Programming Interface (API), although references to SIERRA framework classes are given where appropriate.
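
The division of labor described above (the framework performs refinement and dynamic load balancing; the application supplies callbacks) can be sketched as an interface. The names below are hypothetical and are not the actual SIERRA Framework API, which the paper deliberately does not describe.

```cpp
#include <vector>

// Hypothetical handles for mesh entities and field data; the real
// SIERRA classes are richer and live in the framework's API documents.
struct Element {};
struct FieldData {};

// What an application must supply to use the h-adaptivity services:
class HAdaptiveApplication {
 public:
  virtual ~HAdaptiveApplication() = default;

  // A posteriori error indicator driving refine/unrefine decisions.
  virtual double error_indicator(const Element& e) const = 0;

  // Transfer field values parent -> children on refinement...
  virtual void prolong(const Element& parent,
                       std::vector<FieldData>& children) = 0;
  // ...and children -> parent on unrefinement.
  virtual void restrict_fields(const std::vector<FieldData>& children,
                               Element& parent) = 0;

  // Enforce continuity at hanging nodes created by local refinement.
  virtual void apply_hanging_node_constraints() = 0;
};
```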

SIERRA Framework Version 3: Core Services Theory and Design

Edwards, Harold C.

The SIERRA Framework core services provide essential services for managing the mesh data structure, computational fields, and physics models of an application. An application using these services will supply a set of physics models, define the computational fields that are required by those models, and define the mesh upon which its physics models operate. The SIERRA Framework then manages all of the data for a massively parallel multiphysics application.
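
As a rough illustration of that contract (hypothetical names and stub bodies; not the actual SIERRA API), an application declares its mesh, fields, and physics models and lets the framework own the distributed data:

```cpp
#include <string>

// Hypothetical registration-style usage; the real SIERRA core services
// expose a much richer mesh/field/model interface than shown here.
struct Framework {
  void define_mesh(const std::string& /*mesh_file*/) {}
  void define_field(const std::string& /*name*/, int /*components*/) {}
  void register_model(const std::string& /*physics_name*/) {}
  void execute() {}  // framework manages parallel data and model execution
};

void setup(Framework& fw) {
  fw.define_mesh("geometry.g");           // mesh the physics models operate on
  fw.define_field("displacement", 3);     // fields required by the models
  fw.define_field("temperature", 1);
  fw.register_model("thermoelasticity");  // application-supplied physics
  fw.execute();
}
```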
