Publications

Results 1–200 of 254

GDSA Framework Development and Process Model Integration FY2022

Mariner, Paul M.; Debusschere, Bert D.; Fukuyama, David E.; Harvey, Jacob H.; LaForce, Tara; Leone, Rosemary C.; Perry, Frank V.; Swiler, Laura P.; Taconi, Anna M.

The Spent Fuel and Waste Science and Technology (SFWST) Campaign of the U.S. Department of Energy (DOE) Office of Nuclear Energy (NE), Office of Spent Fuel & Waste Disposition (SFWD) is conducting research and development (R&D) on geologic disposal of spent nuclear fuel (SNF) and high-level nuclear waste (HLW). A high priority for SFWST disposal R&D is disposal system modeling (Sassani et al. 2021). The SFWST Geologic Disposal Safety Assessment (GDSA) work package is charged with developing a disposal system modeling and analysis capability for evaluating generic disposal system performance for nuclear waste in geologic media. This report describes fiscal year (FY) 2022 advances of the Geologic Disposal Safety Assessment (GDSA) performance assessment (PA) development groups of the SFWST Campaign. The common mission of these groups is to develop a geologic disposal system modeling capability for nuclear waste that can be used to assess probabilistically the performance of generic disposal options and generic sites. The modeling capability under development is called GDSA Framework (pa.sandia.gov). GDSA Framework is a coordinated set of codes and databases designed for probabilistically simulating the release and transport of disposed radionuclides from a repository to the biosphere for post-closure performance assessment. Primary components of GDSA Framework include PFLOTRAN to simulate the major features, events, and processes (FEPs) over time, Dakota to propagate uncertainty and analyze sensitivities, meshing codes to define the domain, and various other software for rendering properties, processing data, and visualizing results.

More Details

Probabilistic Nanomagnetic Memories for Uncertain and Robust Machine Learning

Bennett, Christopher H.; Xiao, Tianyao X.; Liu, Samuel L.; Humphrey, Leonard H.; Incorvia, Jean A.; Debusschere, Bert D.; Ries, Daniel R.; Agarwal, Sapan A.

This project evaluated the use of emerging spintronic memory devices for robust and efficient variational inference schemes. Variational inference (VI) schemes, which constrain the distribution for each weight to be a Gaussian with a mean and standard deviation, are a tractable method for calculating posterior distributions of weights in a Bayesian neural network, such that the network can still be trained with the powerful backpropagation algorithm. Our project focuses on domain-wall magnetic tunnel junctions (DW-MTJs), a powerful multi-functional spintronic synapse design that can achieve low-power switching while also opening a pathway toward repeatable, analog operation using fabricated notches. Our initial efforts to employ DW-MTJs as an all-in-one stochastic synapse encoding both a mean and a standard deviation did not meet the quality metrics for hardware-friendly VI; new device stacks and methods for expressive anisotropy modification may yet make this approach viable. As a fallback that immediately satisfies our requirements, we invented and detailed how the combination of a DW-MTJ synapse encoding the mean and a probabilistic Bayes-MTJ device, programmed via a ferroelectric or ionically modifiable layer, can robustly and expressively implement VI. This design includes a physics-informed small-circuit model that was scaled up to demonstrate rigorous uncertainty quantification applications, up to and including small convolutional networks on a grayscale image classification task and larger (residual) networks performing multi-channel image classification. Lastly, because these results all depend on an inference setting in which weights (spintronic memory states) remain non-volatile, the retention of these synapses in the notched case was further interrogated. These investigations revealed the importance of both notch geometry and anisotropy modification for further enhancing the endurance of written spintronic states. In the near future, these results will be mapped to effective predictions of DW-MTJ memory retention at room and elevated temperatures and verified experimentally when devices become available.
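The Gaussian variational scheme described in this abstract admits a compact software sketch: each weight carries a mean and a standard deviation, and predictions are made by Monte Carlo sampling of the weights via the reparameterization trick. The toy single-weight model `y = w * x` and all function names below are assumptions for illustration; this is not the project's hardware or circuit model.

```python
import math
import random

def sample_weight(mu, sigma, rng):
    # Reparameterization: w = mu + sigma * eps with eps ~ N(0, 1), which
    # keeps the sample differentiable in (mu, sigma) for backpropagation.
    return mu + sigma * rng.gauss(0.0, 1.0)

def predictive_stats(x, mu, sigma, n_samples=20000, seed=0):
    """Monte Carlo predictive mean/std of y = w * x when the weight w
    carries a Gaussian variational posterior N(mu, sigma^2)."""
    rng = random.Random(seed)
    ys = [sample_weight(mu, sigma, rng) * x for _ in range(n_samples)]
    mean = sum(ys) / n_samples
    var = sum((y - mean) ** 2 for y in ys) / (n_samples - 1)
    return mean, math.sqrt(var)

mean, std = predictive_stats(x=2.0, mu=1.0, sigma=0.5)
# Analytically y ~ N(2.0, 1.0^2), so the estimates approach mean 2.0, std 1.0.
```

In a hardware realization, the role of `rng.gauss` would be played by the stochastic physics of the device itself.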

More Details

UQTk Version 3.1.2 User Manual

Sargsyan, Khachik S.; Safta, Cosmin S.; Boll, Luke D.; Johnston, Katherine J.; Khalil, Mohammad K.; Chowdhary, Kamaljit S.; Rai, Prashant R.; Casey, Tiernan A.; Zeng, Xiaoshu Z.; Debusschere, Bert D.

The UQ Toolkit (UQTk) is a collection of libraries and tools for the quantification of uncertainty in numerical model predictions. Version 3.1.2 offers intrusive and non-intrusive methods for propagating input uncertainties through computational models, tools for sensitivity analysis, methods for sparse surrogate construction, and Bayesian inference tools for inferring parameters from experimental data. This manual discusses the download and installation process for UQTk, provides pointers to the UQ methods used in the toolkit, and describes some of the examples provided with the toolkit.

More Details

GDSA Framework Development and Process Model Integration FY2021

Mariner, Paul M.; Berg, Timothy M.; Debusschere, Bert D.; Eckert, Aubrey C.; Harvey, Jacob H.; LaForce, Tara; Leone, Rosemary C.; Mills, Melissa M.; Nole, Michael A.; Park, Heeho D.; Perry, F.V.; Seidl, Daniel T.; Swiler, Laura P.; Chang, Kyung W.

The Spent Fuel and Waste Science and Technology (SFWST) Campaign of the U.S. Department of Energy (DOE) Office of Nuclear Energy (NE), Office of Spent Fuel & Waste Disposition (SFWD) is conducting research and development (R&D) on geologic disposal of spent nuclear fuel (SNF) and high-level nuclear waste (HLW). A high priority for SFWST disposal R&D is disposal system modeling (DOE 2012, Table 6; Sevougian et al. 2019). The SFWST Geologic Disposal Safety Assessment (GDSA) work package is charged with developing a disposal system modeling and analysis capability for evaluating generic disposal system performance for nuclear waste in geologic media.

More Details

Science and Engineering of Cybersecurity by Uncertainty quantification and Rigorous Experimentation (SECURE) (Final Report)

Pinar, Ali P.; Tarman, Thomas D.; Swiler, Laura P.; Gearhart, Jared L.; Hart, Derek H.; Vugrin, Eric D.; Cruz, Gerardo C.; Arguello, Bryan A.; Geraci, Gianluca G.; Debusschere, Bert D.; Hanson, Seth T.; Outkin, Alexander V.; Thorpe, Jamie T.; Hart, William E.; Sahakian, Meghan A.; Gabert, Kasimir G.; Glatter, Casey J.; Johnson, Emma S.; Punla-Green, She'ifa P.

This report summarizes the activities performed as part of the Science and Engineering of Cybersecurity by Uncertainty quantification and Rigorous Experimentation (SECURE) Grand Challenge LDRD project. We provide an overview of the research done in this project, including work on cyber emulation, uncertainty quantification, and optimization. We present examples of integrated analyses performed on two case studies: a network scanning/detection study and a malware command and control study. We highlight the importance of experimental workflows and list references of papers and presentations developed under this project. We outline lessons learned and suggestions for future work.

More Details

Science & Engineering of Cyber Security by Uncertainty Quantification and Rigorous Experimentation (SECURE) HANDBOOK

Pinar, Ali P.; Tarman, Thomas D.; Swiler, Laura P.; Gearhart, Jared L.; Hart, Derek H.; Vugrin, Eric D.; Cruz, Gerardo C.; Arguello, Bryan A.; Geraci, Gianluca G.; Debusschere, Bert D.; Hanson, Seth T.; Outkin, Alexander V.; Thorpe, Jamie T.; Hart, William E.; Sahakian, Meghan A.; Gabert, Kasimir G.; Glatter, Casey J.; Johnson, Emma S.; Punla-Green, She'ifa P.

Abstract not provided.

Exploration of multifidelity UQ sampling strategies for computer network applications

International Journal for Uncertainty Quantification

Geraci, Gianluca G.; Crussell, Jonathan C.; Swiler, Laura P.; Debusschere, Bert D.

Network modeling is a powerful tool for rapid analysis of complex systems that can be challenging to study directly through physical testing. Two approaches are considered: emulation and simulation. The former runs real software on virtualized hardware, while the latter mimics the behavior of network components and their interactions in software. Although emulation provides an accurate representation of physical networks, this approach alone cannot guarantee characterization of the system under realistic operative conditions. Operative conditions for physical networks are often characterized by intrinsic variability (payload size, packet latency, etc.) or a lack of precise knowledge regarding the network configuration (bandwidth, delays, etc.); therefore, uncertainty quantification (UQ) strategies should also be employed. UQ strategies require multiple evaluations of the system, with a number of evaluation instances that roughly increases with the problem dimensionality, i.e., the number of uncertain parameters. It follows that a typical UQ workflow for network modeling based on emulation can easily become unattainable due to its prohibitive computational cost. In this paper, a multifidelity sampling approach is discussed and applied to network modeling problems. The main idea is to optimally fuse information coming from simulations, which serve as a low-fidelity version of the emulation problem of interest, in order to decrease the estimator variance. Reducing the estimator variance in a sampling approach usually yields more reliable statistics and therefore a more reliable system characterization. Several network problems of increasing difficulty are presented. For each of them, the performance of the multifidelity estimator is compared with its single-fidelity counterpart, Monte Carlo sampling. For all the test problems studied in this work, the multifidelity estimator demonstrated increased efficiency with respect to Monte Carlo.
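The control-variate idea behind such multifidelity estimators can be sketched in a few lines: a small number of high-fidelity (emulation-like) evaluations is corrected by the difference between a large and a small low-fidelity (simulation-like) sample mean. This is a generic illustration under assumed toy models, not the paper's network estimator; the functions `hf` and `lf` and the unit control-variate weight are assumptions (in practice the weight would be chosen from the estimated high/low-fidelity covariance).

```python
import random
from statistics import fmean

def multifidelity_mean(hf, lf, n_hf, n_lf, seed=0):
    """Two-fidelity control-variate estimate of E[hf(X)] for X ~ U(0, 1):
        mean_HF(n_hf) + (mean_LF(n_lf) - mean_LF(n_hf)).
    Sharing the small input sample between fidelities makes the correction
    cancel noise whenever hf and lf are strongly correlated."""
    rng = random.Random(seed)
    xs_small = [rng.random() for _ in range(n_hf)]
    xs_big = xs_small + [rng.random() for _ in range(n_lf - n_hf)]
    return (fmean(hf(x) for x in xs_small)
            + fmean(lf(x) for x in xs_big)
            - fmean(lf(x) for x in xs_small))

# Toy models: the low-fidelity model is a cheap, biased surrogate of the
# high-fidelity one, standing in for simulation vs. emulation.
est = multifidelity_mean(hf=lambda x: x * x,
                         lf=lambda x: 0.9 * x * x,
                         n_hf=100, n_lf=20000)
# E[X^2] = 1/3; est lands near 1/3 despite only 100 high-fidelity samples.
```

The variance reduction comes entirely from the correlation between the two models; an uncorrelated low-fidelity model would add noise rather than remove it.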

More Details

Progress in Deep Geologic Disposal Safety Assessment in the U.S. since 2010

Mariner, Paul M.; Connolly, Laura A.; Cunningham, Leigh C.; Debusschere, Bert D.; Dobson, David C.; Frederick, Jennifer M.; Hammond, Glenn E.; Jordan, Spencer H.; LaForce, Tara; Nole, Michael A.; Park, Heeho D.; Perry, Frank V.; Rogers, Ralph D.; Seidl, Daniel T.; Sevougian, Stephen D.; Stein, Emily S.; Swift, Peter N.; Swiler, Laura P.; Vo, Jonathan V.; Wallace, Michael G.

Abstract not provided.

Exploration of multifidelity approaches for uncertainty quantification in network applications

Proceedings of the 3rd International Conference on Uncertainty Quantification in Computational Sciences and Engineering, UNCECOMP 2019

Geraci, Gianluca G.; Swiler, Laura P.; Crussell, Jonathan C.; Debusschere, Bert D.

Communication networks have evolved to a level of sophistication that requires computer models and numerical simulations to understand and predict their behavior. A network simulator is software that enables the network designer to model components of a computer network, such as nodes, routers, switches, and links, and events, such as data transmissions and packet errors, in order to obtain device- and network-level metrics. Network simulations, like many other numerical approximations of complex systems, are subject to the specification of parameters and operative conditions of the system. Very often the full characterization of the system and its inputs is not possible; therefore, Uncertainty Quantification (UQ) strategies need to be deployed to evaluate the statistics of its response and behavior. UQ techniques, despite the advancements of the last two decades, still struggle in the presence of a large number of uncertain variables and when the regularity of the system's response cannot be guaranteed. In this context, multifidelity approaches have recently gained popularity in the UQ community due to their flexibility and robustness with respect to these challenges. The main idea behind these techniques is to extract information from a limited number of high-fidelity model realizations and complement it with a much larger set of lower-fidelity evaluations. The final result is an estimator with much lower variance, i.e., a more accurate and reliable estimator. In this contribution we investigate the possibility of deploying multifidelity UQ strategies for computer network analysis. Two numerical configurations are studied, based on a simplified network with one client and one server. Preliminary results for these tests suggest that multifidelity sampling techniques can serve as effective UQ tools in network applications.

More Details

UQTk Version 3.0.4 User Manual

Sargsyan, Khachik S.; Safta, Cosmin S.; Chowdhary, Kamaljit S.; Castorena, Sarah C.; de Bord, Sarah d.; Debusschere, Bert D.

The UQ Toolkit (UQTk) is a collection of libraries and tools for the quantification of uncertainty in numerical model predictions. Version 3.0.4 offers intrusive and non-intrusive methods for propagating input uncertainties through computational models, tools for sensitivity analysis, methods for sparse surrogate construction, and Bayesian inference tools for inferring parameters from experimental data. This manual discusses the download and installation process for UQTk, provides pointers to the UQ methods used in the toolkit, and describes some of the examples provided with the toolkit.

More Details

Uncertainty quantification toolkit (UQTk)

Handbook of Uncertainty Quantification

Debusschere, Bert D.; Sargsyan, Khachik S.; Safta, Cosmin S.; Chowdhary, Kenny

The UQ Toolkit (UQTk) is a collection of tools for uncertainty quantification, ranging from intrusive and nonintrusive forward propagation of uncertainty to inverse problems and sensitivity analysis. This chapter first outlines the UQTk design philosophy, followed by an overview of the available methods and the way they are implemented in UQTk. The second part of the chapter is a detailed example that illustrates a UQ workflow from surrogate construction and calibration to forward propagation and attribution.

More Details

Intrusive polynomial chaos methods for forward uncertainty propagation

Handbook of Uncertainty Quantification

Debusschere, Bert D.

Polynomial chaos (PC)-based intrusive methods for uncertainty quantification reformulate the original deterministic model equations to obtain a system of equations for the PC coefficients of the model outputs. This system of equations is larger than the original model equations, but solving it once yields the uncertainty information for all quantities in the model. This chapter gives an overview of the literature on intrusive methods, outlines the approach on a general level, and then applies it to a system of three ordinary differential equations that model a surface reaction system. Common challenges and opportunities for intrusive methods are also highlighted.
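As a concrete illustration of the reformulation described above, consider the scalar ODE du/dt = -k u with an uncertain rate k. Expanding both k and u in a first-order Hermite polynomial chaos and projecting the equations (Galerkin) yields a coupled deterministic system for the PC coefficients, which is larger than the original ODE but is solved only once. The sketch below is a hand-rolled toy under these assumptions, not the chapter's surface-reaction example and not UQTk code.

```python
import math

def intrusive_pc_decay(k0=1.0, k1=0.1, t_end=0.5, dt=1e-4):
    """Galerkin system for du/dt = -k u with k = k0 + k1*xi and
    u = u0 + u1*xi (xi standard normal, Hermite basis truncated at order 1):
        du0/dt = -(k0*u0 + k1*u1)
        du1/dt = -(k1*u0 + k0*u1)
    One solve of this 2x2 system yields the mean (u0) and, at this
    truncation order, the standard deviation (|u1|) of u."""
    u0, u1 = 1.0, 0.0                        # deterministic u(0) = 1
    for _ in range(int(round(t_end / dt))):  # forward Euler in time
        d0 = -(k0 * u0 + k1 * u1)
        d1 = -(k1 * u0 + k0 * u1)
        u0, u1 = u0 + dt * d0, u1 + dt * d1
    return u0, u1

u0, u1 = intrusive_pc_decay()
# Closed form of the truncated system: u0(t) = exp(-k0*t) * cosh(k1*t),
# u1(t) = -exp(-k0*t) * sinh(k1*t).
```

Note how the coupling term k1*u1 in the mean equation is exactly the kind of modification to the deterministic model that makes these methods "intrusive".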

More Details

UQTk Version 3.0.3 User Manual

Sargsyan, Khachik S.; Safta, Cosmin S.; Chowdhary, Kamaljit S.; Castorena, Sarah C.; de Bord, Sarah d.; Debusschere, Bert D.

The UQ Toolkit (UQTk) is a collection of libraries and tools for the quantification of uncertainty in numerical model predictions. Version 3.0.3 offers intrusive and non-intrusive methods for propagating input uncertainties through computational models, tools for sensitivity analysis, methods for sparse surrogate construction, and Bayesian inference tools for inferring parameters from experimental data. This manual discusses the download and installation process for UQTk, provides pointers to the UQ methods used in the toolkit, and describes some of the examples provided with the toolkit.

More Details

Performance scaling variability and energy analysis for a resilient ULFM-based PDE solver

Proceedings of ScalA 2016: 7th Workshop on Latest Advances in Scalable Algorithms for Large-Scale Systems - Held in conjunction with SC16: The International Conference for High Performance Computing, Networking, Storage and Analysis

Morris, K.; Rizzi, F.; Cook, B.; Mycek, P.; LeMaitre, O.; Knio, O.M.; Sargsyan, Khachik S.; Dahlgren, K.; Debusschere, Bert D.

We present a resilient task-based domain-decomposition preconditioner for partial differential equations (PDEs) built on top of User Level Fault Mitigation Message Passing Interface (ULFM-MPI). The algorithm reformulates the PDE as a sampling problem, followed by a robust regression-based solution update that is resilient to silent data corruptions (SDCs). We adopt a server-client model where all state information is held by the servers, while clients only serve as computational units. The task-based nature of the algorithm and the capabilities of ULFM complement each other to support missing tasks, making the application resilient to clients failing. We present weak and strong scaling results on Edison, National Energy Research Scientific Computing Center (NERSC), for a nominal and a fault-injected case, showing that even in the presence of faults, parallel efficiency remains within 90% at the scales tested, up to 50k cores. We then quantify the variability of weak and strong scaling due to the presence of faults. Finally, we discuss the performance of our application with respect to subdomain size, server/client configuration, and the interplay between energy and resilience.

More Details

Uncertainty Quantification in LES Computations of Turbulent Multiphase Combustion in a Scramjet Engine (ScramjetUQ)

Najm, H.N.; Debusschere, Bert D.; Safta, Cosmin S.; Sargsyan, Khachik S.; Huan, Xun H.; Oefelein, Joseph C.; Lacaze, Guilhem M.; Vane, Zachary P.; Eldred, Michael S.; Geraci, Gianluca G.; Knio, Omar K.; Sraj, I.S.; Scovazzi, G.S.; Colomes, O.C.; Marzouk, Y.M.; Zahm, O.Z.; Menhorn, F.M.; Ghanem, R.G.; Tsilifis, P.T.

Abstract not provided.

Uncertainty Quantification in LES Computations of Turbulent Multiphase Combustion in a Scramjet Engine

Najm, H.N.; Debusschere, Bert D.; Safta, Cosmin S.; Sargsyan, Khachik S.; Huan, Xun H.; Oefelein, Joseph C.; Lacaze, Guilhem M.; Vane, Zachary P.; Eldred, Michael S.; Geraci, G.G.; Knio, O.K.; Sraj, I.S.; Scovazzi, G.S.; Colomes, O.C.; Marzouk, Y.M.; Zahm, O.Z.; Augustin, F.A.; Menhorn, F.M.; Ghanem, R.G.; Tsilifis, P.T.

Abstract not provided.

UQTk Version 3.0 User Manual

Sargsyan, Khachik S.; Safta, Cosmin S.; Chowdhary, Kamaljit S.; Castorena, Sarah C.; de Bord, Sarah d.; Debusschere, Bert D.

The UQ Toolkit (UQTk) is a collection of libraries and tools for the quantification of uncertainty in numerical model predictions. Version 3.0 offers intrusive and non-intrusive methods for propagating input uncertainties through computational models, tools for sensitivity analysis, methods for sparse surrogate construction, and Bayesian inference tools for inferring parameters from experimental data. This manual discusses the download and installation process for UQTk, provides pointers to the UQ methods used in the toolkit, and describes some of the examples provided with the toolkit.

More Details

Exploring the Interplay of Resilience and Energy Consumption for a Task-Based Partial Differential Equations Preconditioner

Rizzi, Francesco N.; Morris Wright, Karla V.; Sargsyan, Khachik S.; Mycek, Paul M.; Safta, Cosmin S.; Le Maitre, Olivier L.; Knio, Omar K.; Debusschere, Bert D.

We discuss algorithm-based resilience to silent data corruption (SDC) in a task-based domain-decomposition preconditioner for partial differential equations (PDEs). The algorithm exploits a reformulation of the PDE as a sampling problem, followed by a solution update through data manipulation that is resilient to SDC. The implementation is based on a server-client model where all state information is held by the servers, while clients are designed solely as computational units. Scalability tests run up to ~51K cores show a parallel efficiency greater than 90%. We use a 2D elliptic PDE and a fault model based on random single bit-flips to demonstrate the resilience of the application to synthetically injected SDC. We discuss two fault scenarios: one based on the corruption of all data of a target task, and the other involving the corruption of a single data point. We show that for our application, given the test problem considered, a four-fold increase in the number of faults only yields a 2% change in the overhead to overcome their presence, from 7% to 9%. We then discuss potential savings in energy consumption via dynamic voltage/frequency scaling, and its interplay with fault rates and application overhead.

More Details

Partial differential equations preconditioner resilient to soft and hard faults

Proceedings - IEEE International Conference on Cluster Computing, ICCC

Rizzi, Francesco N.; Morris Wright, Karla V.; Sargsyan, Khachik S.; Mycek, Paul; Safta, Cosmin S.; Le Maitre, Olivier; Knio, Omar; Debusschere, Bert D.

We present a domain-decomposition-based pre-conditioner for the solution of partial differential equations (PDEs) that is resilient to both soft and hard faults. The algorithm is based on the following steps: first, the computational domain is split into overlapping subdomains, second, the target PDE is solved on each subdomain for sampled values of the local current boundary conditions, third, the subdomain solution samples are collected and fed into a regression step to build maps between the subdomains' boundary conditions, finally, the intersection of these maps yields the updated state at the subdomain boundaries. This reformulation allows us to recast the problem as a set of independent tasks. The implementation relies on an asynchronous server-client framework, where one or more reliable servers hold the data, while the clients ask for tasks and execute them. This framework provides resiliency to hard faults such that if a client crashes, it stops asking for work, and the servers simply distribute the work among all the other clients alive. Erroneous subdomain solves (e.g. due to soft faults) appear as corrupted data, which is either rejected if that causes a task to fail, or is seamlessly filtered out during the regression stage through a suitable noise model. Three different types of faults are modeled: hard faults modeling nodes (or clients) crashing, soft faults occurring during the communication of the tasks between server and clients, and soft faults occurring during task execution. We demonstrate the resiliency of the approach for a 2D elliptic PDE, and explore the effect of the faults at various failure rates.
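The "seamless filtering" of corrupted subdomain solves described above can be illustrated with any robust statistic that down-weights gross outliers. The median-absolute-deviation filter below is a generic stand-in for the paper's regression-stage noise model; the function name, the threshold `k`, and the sample data are assumptions for illustration only.

```python
def robust_mean(samples, k=3.0):
    """Average samples after rejecting outliers farther than k median
    absolute deviations (MAD) from the median; a bit-flipped value
    typically lands far outside the band and is discarded."""
    s = sorted(samples)
    median = s[len(s) // 2]
    mad = sorted(abs(x - median) for x in samples)[len(samples) // 2]
    band = k * max(mad, 1e-12)    # guard against an all-identical sample
    kept = [x for x in samples if abs(x - median) <= band]
    return sum(kept) / len(kept)

solves = [1.00, 1.05, 0.95, 1.10, 0.90, 6.55e4]   # last entry: corrupted
# robust_mean(solves) recovers ~1.0, while a plain mean would be pulled
# to ~1.1e4 by the single corrupted solve.
```

The key property, mirroring the paper's design, is that no explicit fault detection is needed: corruption simply shows up as data the robust statistic ignores.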

More Details

Quantification of Uncertainty in Extreme Scale Computations

Debusschere, Bert D.; Jakeman, John D.; Chowdhary, Kamaljit S.; Safta, Cosmin S.; Sargsyan, Khachik S.; Rai, P.R.; Ghanem, R.G.; Knio, O.K.; Le Maitre, O.L.; Winokur, J.W.; Li, G.L.; Ghattas, O.G.; Moser, R.M.; Simmons, C.S.; Alexanderian, A.A.; Gattiker, J.G.; Higdon, D.H.; Lawrence, E.L.; Bhat, S.B.; Marzouk, Y.M.; Bigoni, D.B.; Cui, T.C.; Parno, M.P.

Abstract not provided.

Fault Resilient Domain Decomposition Preconditioner for PDEs

Sargsyan, Khachik S.; Safta, Cosmin S.; Debusschere, Bert D.; Najm, H.N.; Rizzi, Francesco N.; Morris Wright, Karla V.; Mycek, Paul M.; Le Maitre, Olivier L.; Knio, Omar K.

The move towards extreme-scale computing platforms challenges scientific simulations in many ways. Given the recent tendencies in computer architecture development, one needs to reformulate legacy codes in order to cope with large amounts of communication, system faults, and requirements of low memory usage per core. In this work, we develop a novel framework for solving partial differential equations (PDEs) via domain decomposition that reformulates the solution as a state-of-knowledge with a probabilistic interpretation. This reformulation allows resiliency with respect to potential faults without having to apply fault detection, avoids unnecessary communication, and is generally well-positioned for rigorous uncertainty quantification studies that target improvements of the predictive fidelity of scientific models. We demonstrate our algorithm on one-dimensional PDE examples where artificial faults have been implemented as bit-flips in the binary representation of subdomain solutions.

More Details

Hybrid discrete/continuum algorithms for stochastic reaction networks

Journal of Computational Physics

Safta, Cosmin S.; Sargsyan, Khachik S.; Debusschere, Bert D.; Najm, H.N.

Direct solutions of the Chemical Master Equation (CME) governing Stochastic Reaction Networks (SRNs) are generally prohibitively expensive due to excessive numbers of possible discrete states in such systems. To enhance computational efficiency we develop a hybrid approach where the evolution of states with low molecule counts is treated with the discrete CME model while that of states with large molecule counts is modeled by the continuum Fokker-Planck equation. The Fokker-Planck equation is discretized using a 2nd order finite volume approach with appropriate treatment of flux components. The numerical construction at the interface between the discrete and continuum regions implements the transfer of probability reaction by reaction according to the stoichiometry of the system. The performance of this novel hybrid approach is explored for a two-species circadian model with computational efficiency gains of about one order of magnitude.

More Details

Probabilistic methods for sensitivity analysis and calibration in the NASA challenge problem

Journal of Aerospace Information Systems

Safta, Cosmin S.; Sargsyan, Khachik S.; Najm, H.N.; Chowdhary, Kenny; Debusschere, Bert D.; Swiler, Laura P.; Eldred, Michael S.

In this paper, a series of algorithms are proposed to address the problems in the NASA Langley Research Center Multidisciplinary Uncertainty Quantification Challenge. A Bayesian approach is employed to characterize and calibrate the epistemic parameters based on the available data, whereas a variance-based global sensitivity analysis is used to rank the epistemic and aleatory model parameters. A nested sampling of the aleatory-epistemic space is proposed to propagate uncertainties from model parameters to output quantities of interest.
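The nested aleatory-epistemic sampling mentioned above can be sketched as a double loop: epistemic parameters are drawn in the outer loop, and for each fixed epistemic realization the aleatory variables are propagated in the inner loop, producing an interval of inner-loop statistics. This generic sketch uses an assumed additive toy model with uniform epistemic and Gaussian aleatory inputs, not the challenge-problem models; all names are illustrative.

```python
import random

def nested_uq(model, epistemic_bounds, aleatory_std, n_ep=50, n_al=200, seed=0):
    """Outer loop: sample the interval-valued epistemic parameter theta.
    Inner loop: propagate the aleatory variable for that fixed theta.
    Returns the spread of inner-loop means, i.e. the epistemic interval
    on the aleatory mean of the output quantity of interest."""
    rng = random.Random(seed)
    lo, hi = epistemic_bounds
    inner_means = []
    for _ in range(n_ep):
        theta = rng.uniform(lo, hi)                       # epistemic draw
        ys = [model(theta, rng.gauss(0.0, aleatory_std))  # aleatory draws
              for _ in range(n_al)]
        inner_means.append(sum(ys) / n_al)
    return min(inner_means), max(inner_means)

low, high = nested_uq(lambda theta, a: theta + a, (0.0, 1.0), aleatory_std=0.1)
# With 50 epistemic draws the interval [low, high] nearly spans (0, 1).
```

The cost of the double loop (n_ep * n_al model evaluations) is what motivates the surrogate-based accelerations discussed elsewhere in this listing.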

More Details

Rexsss Performance Analysis: Domain Decomposition Algorithm Implementations for Resilient Numerical Partial Differential Equation Solvers

Dahlgren, Kathryn M.; Rizzi, Francesco N.; Morris Wright, Karla V.; Debusschere, Bert D.

The future of extreme-scale computing is expected to magnify the influence of soft faults as a source of inaccuracy or failure in solutions obtained from distributed parallel computations. The development of resilient computational tools represents an essential recourse for understanding the best methods for absorbing the impacts of soft faults without sacrificing solution accuracy. The Rexsss (Resilient Extreme Scale Scientific Simulations) project pursues the development of fault resilient algorithms for solving partial differential equations (PDEs) on distributed systems. Performance analyses of current algorithm implementations assist in the identification of runtime inefficiencies.

More Details

UQTk Version 2.0 User Manual

Debusschere, Bert D.; Sargsyan, Khachik S.; Safta, Cosmin S.

The UQ Toolkit (UQTk) is a collection of libraries and tools for the quantification of uncertainty in numerical model predictions. Version 2.0 offers intrusive and non-intrusive methods for propagating input uncertainties through computational models, tools for sensitivity analysis, methods for sparse surrogate construction, and Bayesian inference tools for inferring parameters from experimental data. This manual discusses the download and installation process for UQTk, provides pointers to the UQ methods used in the toolkit, and describes some of the examples provided with the toolkit.

More Details

Fundamental issues in the representation and propagation of uncertain equation of state information in shock hydrodynamics

Computers and Fluids

Robinson, Allen C.; Berry, Robert D.; Carpenter, John H.; Debusschere, Bert D.; Drake, Richard R.; Mattsson, A.E.; Rider, William J.

Uncertainty quantification (UQ) deals with providing reasonable estimates of the uncertainties associated with an engineering model and propagating them to final engineering quantities of interest. We present a conceptual UQ framework for the case of shock hydrodynamics with Euler's equations, where the uncertainties are assumed to lie principally in the equation of state (EOS). In this paper we consider experimental data as providing both data and an estimate of data uncertainty. We propose a specific Bayesian inference approach for characterizing EOS uncertainty in thermodynamic phase space. We show how this approach provides a natural and efficient methodology for transferring data uncertainty to engineering outputs through an EOS representation that understands and deals consistently with parameter correlations as sensed in the data. Historically, complex multiphase EOSs have been built utilizing tables as the delivery mechanism in order to amortize the cost of creating the tables over many subsequent continuum-scale runs. Once UQ enters the picture, however, the proper operational paradigm for multiphase tables becomes much less clear. Using a simple single-phase Mie-Grüneisen model, we experiment with several approaches and demonstrate how uncertainty can be represented. We also show how the quality of the tabular representation is of key importance. As a first step, we demonstrate a particular tabular approach for the Mie-Grüneisen model which, when extended to multiphase tables, should have value for designing a UQ-enabled shock hydrodynamic modeling approach that is not only theoretically sound but also robust, useful, and acceptable to the modeling community. We also propose an approach to separate data uncertainty from modeling error in the EOS. © 2012 Elsevier Ltd.

More Details

Multiparameter spectral representation of noise-induced competence in bacillus subtilis

IEEE/ACM Transactions on Computational Biology and Bioinformatics

Sargsyan, Khachik S.; Safta, Cosmin S.; Debusschere, Bert D.; Najm, H.N.

In this work, the problem of representing a stochastic forward model output with respect to a large number of input parameters is considered. The methodology is applied to a stochastic reaction network of competence dynamics in Bacillus subtilis bacterium. In particular, the dependence of the competence state on rate constants of underlying reactions is investigated. We base our methodology on Polynomial Chaos (PC) spectral expansions that allow effective propagation of input parameter uncertainties to outputs of interest. Given a number of forward model training runs at sampled input parameter values, the PC modes are estimated using a Bayesian framework. As an outcome, these PC modes are described with posterior probability distributions. The resulting expansion can be regarded as an uncertain response function and can further be used as a computationally inexpensive surrogate instead of the original reaction model for subsequent analyses such as calibration or optimization studies. Furthermore, the methodology is enhanced with a classification-based mixture PC formulation that overcomes the difficulties associated with representing potentially nonsmooth input-output relationships. Finally, the global sensitivity analysis based on the multiparameter spectral representation of an observable of interest provides biological insight and reveals the most important reactions and their couplings for the competence dynamics. © 2013 IEEE.

More Details

Efficient uncertainty quantification methodologies for high-dimensional climate land models

Sargsyan, Khachik S.; Safta, Cosmin S.; Berry, Robert D.; Ray, Jaideep R.; Debusschere, Bert D.; Najm, H.N.

In this report, we proposed, examined, and implemented approaches for performing efficient uncertainty quantification (UQ) in climate land models. Specifically, we applied a Bayesian compressive sensing framework to polynomial chaos spectral expansions, enhanced it with an iterative basis-reduction algorithm, and investigated the results on test models as well as on the Community Land Model (CLM). Furthermore, we discussed the construction of efficient quadrature rules for forward propagation of uncertainties from a high-dimensional, constrained input space to output quantities of interest. The work lays the groundwork for efficient forward UQ for high-dimensional, strongly nonlinear, and computationally costly climate models. Moreover, to investigate parameter inference approaches, we applied two variants of the Markov chain Monte Carlo (MCMC) method to a soil moisture dynamics submodel of the CLM. The evaluation of these algorithms gave us a good foundation for further building out the Bayesian calibration framework towards the goal of robust component-wise calibration.

More Details