The purpose of our report is to discuss the notion of entropy and its relationship with statistics. Our goal is to provide a way of thinking about entropy, its central role within information theory, and its relationship with statistics. We review various relationships between information theory and statistics; nearly all are well known but unfortunately often go unrecognized. Entropy quantifies the "average amount of surprise" in a random variable and lies at the heart of information theory, which studies the transmission, processing, extraction, and utilization of information. For us, data is information. What is the distinction between information theory and statistics? Information theorists work with probability distributions, whereas statisticians work with samples. In so many words, information theory applied to samples is the practice of statistics. Acknowledgements. We thank Danny Dunlavy, Carlos Llosa, Oscar Lopez, Arvind Prasadan, Gary Saavedra, and Jeremy Wendt for helpful discussions along the way. Our report was supported by the Laboratory Directed Research and Development program at Sandia National Laboratories, a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
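To make the "average amount of surprise" concrete, recall the standard definition of Shannon entropy (a textbook formula, not a result of our report). For a discrete random variable $X$ with probability mass function $p$,

$$ H(X) \;=\; -\sum_{x} p(x)\,\log_2 p(x) \;=\; \mathbb{E}\bigl[-\log_2 p(X)\bigr], $$

where $-\log_2 p(x)$ is the "surprise" (in bits) of observing the outcome $x$; entropy is exactly the average surprise.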
Computational design-based optimization is a widely used tool in science and engineering. Our report documents the successful use of particle sensitivity analysis for design-based optimization within Monte Carlo sampling-based particle simulation, a capability that was previously unavailable. Such a capability enables the particle simulation communities to go beyond forward simulation and promises to reduce the burden on overworked analysts by getting more done with less computation.
Neuromorphic computing, which aims to replicate the computational structure and architecture of the brain in synthetic hardware, has typically focused on artificial intelligence applications. What is less explored is whether such brain-inspired hardware can provide value beyond cognitive tasks. Here we show that the high degree of parallelism and configurability of spiking neuromorphic architectures makes them well suited to implement random walks via discrete-time Markov chains. These random walks are useful in Monte Carlo methods, which represent a fundamental computational tool for solving a wide range of numerical computing tasks. Using IBM’s TrueNorth and Intel’s Loihi neuromorphic computing platforms, we show that our neuromorphic computing algorithm for generating random walk approximations of diffusion offers advantages in energy-efficient computation compared with conventional approaches. We also show that our neuromorphic computing algorithm can be extended to more sophisticated jump-diffusion processes that are useful in a range of applications, including financial economics, particle physics and machine learning.
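As a rough illustration of the underlying idea (a conventional CPU sketch, not the TrueNorth/Loihi implementation), a discrete-time Markov chain of unbiased steps approximates diffusion: the walker variance grows linearly in time with diffusion coefficient $D = \Delta x^2 / (2\Delta t)$.

```python
import numpy as np

# Minimal CPU sketch (not the neuromorphic implementation): unbiased
# discrete-time random walkers whose ensemble approximates diffusion.
rng = np.random.default_rng(0)
n_walkers, n_steps = 10_000, 1_000
dx, dt = 1.0, 1.0                      # lattice spacing and time step

steps = rng.choice([-dx, dx], size=(n_walkers, n_steps))
positions = steps.cumsum(axis=1)       # walker trajectories

# For this chain, Var[X_t] = 2*D*t with D = dx**2 / (2*dt).
D = dx**2 / (2 * dt)
t = n_steps * dt
print(positions[:, -1].var(), 2 * D * t)  # empirical vs. theoretical variance
```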
We propose a novel statistical inference paradigm for zero-inflated multiway count data that dispenses with the need to distinguish between true and false zero counts. Our approach ignores all zero entries and applies zero-truncated Poisson regression on the positive counts. Inference is accomplished via tensor completion that imposes low-rank structure on the Poisson parameter space. Our main result shows that an $N$-way rank-$R$ parametric tensor $\mathcal{M} \in (0, \infty)^{I \times \cdots \times I}$ generating Poisson observations can be accurately estimated from approximately $I R^2 \log_2^2(I)$ non-zero counts for a nonnegative canonical polyadic decomposition. Several numerical experiments are presented demonstrating that our zero-truncated paradigm is comparable to the ideal scenario where the locations of false zero counts are known a priori.
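For reference, the zero-truncated Poisson distribution underlying this regression is standard: if $Y \sim \mathrm{Poisson}(\lambda)$, then conditioning on positivity gives

$$ \Pr(Y = k \mid Y > 0) \;=\; \frac{\lambda^{k} e^{-\lambda}}{k!\,\bigl(1 - e^{-\lambda}\bigr)}, \qquad k = 1, 2, \dots, $$

so the positive counts alone identify $\lambda$, which is what allows the paradigm to ignore all zero entries.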
Recent advances in neuromorphic algorithm development have shown that neural inspired architectures can efficiently solve scientific computing problems including graph decision problems and partial-integro differential equations (PIDEs). The latter requires the generation of a large number of samples from a stochastic process. While the Monte Carlo approximation of the solution of the PIDEs converges with an increasing number of sampled neuromorphic trajectories, the fidelity of samples from a given stochastic process using neuromorphic hardware requires verification. Such an exercise increases our trust in this emerging hardware and works toward unlocking its energy and scaling efficiency for scientific purposes such as synthetic data generation and stochastic simulation. In this paper, we focus our verification efforts on a one-dimensional Ornstein-Uhlenbeck stochastic differential equation. Using a discrete-time Markov chain approximation, we sample trajectories of the stochastic process across a variety of parameters on an eight-chip Intel Loihi Nahuku neuromorphic platform. Using relative entropy as a verification measure, we demonstrate that the random samples generated on Loihi are, in an average sense, acceptable. Finally, we demonstrate how Loihi's fidelity to the distribution changes as a function of the parameters of the Ornstein-Uhlenbeck equation, highlighting a trade-off between the lower-precision random number generation of the neuromorphic platform and our algorithm's ability to represent a discrete-time Markov chain.
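A minimal CPU sketch of the kind of check described here (illustrative only; the paper's samples come from Loihi via a discrete-time Markov chain approximation): simulate the Ornstein-Uhlenbeck process $dX = \theta(\mu - X)\,dt + \sigma\,dW$ by Euler-Maruyama and compare the empirical long-time histogram against the known stationary density $\mathcal{N}(\mu, \sigma^2/(2\theta))$ using relative entropy.

```python
import numpy as np

# Illustrative CPU check (the paper's samples come from Loihi, not this code):
# Euler-Maruyama simulation of dX = theta*(mu - X)*dt + sigma*dW.
rng = np.random.default_rng(1)
theta, mu, sigma = 1.0, 0.0, 0.5
dt, n_steps, n_paths = 1e-2, 5_000, 2_000

x = np.zeros(n_paths)
for _ in range(n_steps):
    x += theta * (mu - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

# Stationary law is N(mu, sigma^2 / (2*theta)); bin both and compute KL.
var_stat = sigma**2 / (2 * theta)
edges = np.linspace(mu - 4 * np.sqrt(var_stat), mu + 4 * np.sqrt(var_stat), 41)
p, _ = np.histogram(x, bins=edges)
p = p / p.sum()
centers = 0.5 * (edges[:-1] + edges[1:])
q = np.exp(-(centers - mu) ** 2 / (2 * var_stat))
q = q / q.sum()
mask = p > 0
kl = np.sum(p[mask] * np.log(p[mask] / q[mask]))  # relative entropy D(p || q)
print(f"D(empirical || stationary) = {kl:.4f}")
```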
The highly parallel, spiking neural networks of neuromorphic processors can enable computationally powerful formulations. While recent interest has focused primarily on machine learning tasks, the space of appropriate applications is wide and continually expanding. Here, we leverage the parallel and event-driven structure to solve a steady-state heat equation using a random walk method. The random walk can be executed fully within a spiking neural network using stochastic neuron behavior, and we provide results from both IBM TrueNorth and Intel Loihi implementations. Additionally, we position this algorithm as a potential scalable benchmark for neuromorphic systems.
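The classical probabilistic idea being exploited is shown below as a plain Python sketch (not the spiking implementation): the solution of the steady-state heat equation at an interior point equals the expected boundary temperature seen by a random walker started there.

```python
import random

# Plain-Python sketch of the random walk method for the steady-state heat
# equation (Laplace's equation) on a square grid; not the spiking version.
N = 20                                    # interior grid is 1..N-1 in x and y

def boundary_temp(i, j):
    return 1.0 if j == N else 0.0         # hot top edge, cold elsewhere

def estimate(i, j, n_walks=20_000):
    total = 0.0
    for _ in range(n_walks):
        x, y = i, j
        while 0 < x < N and 0 < y < N:    # walk until the boundary is hit
            dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x, y = x + dx, y + dy
        total += boundary_temp(x, y)
    return total / n_walks                # mean boundary value approximates u(i, j)

print(estimate(N // 2, N // 2))           # center of the square; approx 0.25 by symmetry
```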
A mechanical model is introduced for predicting the initiation and evolution of complex fracture patterns without the need for a damage variable or law. The model, a continuum variant of Newton's second law, uses integral rather than partial differential operators, where the region of integration is a finite domain. The force interaction is derived from a novel nonconvex strain energy density function, resulting in a nonmonotonic material model. The resulting equation of motion is proved to be mathematically well-posed. The model has the capacity to simulate nucleation and growth of multiple, mutually interacting dynamic fractures. In the limit of a vanishing region of integration, the model reproduces the classic Griffith model of brittle fracture. The simplicity of the formulation thus avoids the need for supplemental kinetic relations that dictate crack growth or the need for an explicit damage evolution law.
The random walk is a fundamental stochastic process that underlies many numerical tasks in scientific computing applications. We consider here two neural algorithms that can be used to efficiently implement random walks on spiking neuromorphic hardware. The first method tracks the positions of individual walkers independently by using a modular code inspired by the grid cell spatial representation in the brain. The second method tracks the densities of random walkers at each spatial location directly. We analyze the scaling complexity of each of these methods and illustrate their ability to model random walkers under different probabilistic conditions.
The rise of low-power neuromorphic hardware has the potential to change high-performance computing; however, much of the focus on brain-inspired hardware has been on machine learning applications. A low-power solution for solving partial differential equations could radically change how we approach large-scale computing in the future. The random walk is a fundamental stochastic process that underlies many numerical tasks in scientific computing applications. We consider here two neural algorithms that can be used to efficiently implement random walks on spiking neuromorphic hardware. The first method tracks the positions of individual walkers independently by using a modular code inspired by grid cells in the brain. The second method tracks the densities of random walkers at each spatial location directly. We present the scaling complexity of each of these methods and illustrate their ability to model random walkers under different probabilistic conditions. Finally, we present implementations of these algorithms on neuromorphic hardware.
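A schematic contrast of the two bookkeeping strategies (ordinary NumPy, not the neuromorphic circuits): tracking individual walker positions versus pushing the full density vector through the transition matrix.

```python
import numpy as np

# Schematic contrast of the two strategies on a 1-D ring (a NumPy stand-in
# for the neuromorphic circuits described above).
rng = np.random.default_rng(2)
n_sites, n_steps, n_walkers = 50, 200, 5_000

# Method 1: track each walker's position individually.
pos = np.full(n_walkers, n_sites // 2)
for _ in range(n_steps):
    pos = (pos + rng.choice([-1, 1], size=n_walkers)) % n_sites

# Method 2: track the walker density at every site directly.
P = np.zeros((n_sites, n_sites))
for i in range(n_sites):                # unbiased nearest-neighbor hops
    P[i, (i - 1) % n_sites] = 0.5
    P[i, (i + 1) % n_sites] = 0.5
rho = np.zeros(n_sites)
rho[n_sites // 2] = 1.0
for _ in range(n_steps):
    rho = rho @ P                       # one matrix-vector product per step

empirical = np.bincount(pos, minlength=n_sites) / n_walkers
print(np.abs(empirical - rho).max())    # the two methods agree statistically
```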
Traditionally, material identification is performed using global load and displacement data from simple boundary-value problems such as uni-axial tensile and simple shear tests. More recently, however, inverse techniques such as the Virtual Fields Method (VFM) that capitalize on heterogeneous, full-field deformation data have gained popularity. In this work, we have written a VFM code in a finite-deformation framework for calibration of a viscoplastic (i.e., strain-rate-dependent) material model for 304L stainless steel. Using simulated experimental data generated via finite-element analysis (FEA), we verified our VFM code and compared the identified parameters with the reference parameters input into the FEA. The identified material model parameters had surprisingly large error compared to the reference parameters, which was traced to parameter covariance and the existence of many essentially equivalent parameter sets. This parameter non-uniqueness and its implications for FEA predictions are discussed in detail. Finally, we present two strategies to reduce parameter covariance – reduced parametrization of the material model and increased richness of the calibration data – which allow for the recovery of a unique solution.
Modeling material and component behavior using finite element analysis (FEA) is critical for modern engineering. One key to a credible model is having an accurate material model, with calibrated model parameters, which describes the constitutive relationship between the deformation and the resulting stress in the material. As such, identifying material model parameters is critical to accurate and predictive FEA. Traditional calibration approaches use only global data (e.g. extensometers and resultant force) and simplified geometries to find the parameters. However, the utilization of rapidly maturing full-field characterization techniques (e.g. Digital Image Correlation (DIC)) with inverse techniques (e.g. the Virtual Fields Method (VFM)) provides a new and improved method for parameter identification. This LDRD tested that idea: in particular, whether more parameters could be identified per test when using full-field data. The research described in this report successfully proves this hypothesis by comparing the VFM results with traditional calibration methods. Important products of the research include: verified VFM codes for identifying model parameters, a new look at parameter covariance in material model parameter estimation, new validation techniques to better utilize full-field measurements, and an exploration of optimized specimen design for improved data richness.
We introduce a meshless method for solving both continuous and discrete variational formulations of a volume constrained, non-local diffusion problem. We use the discrete solution to approximate the continuous solution. Our method is non-conforming and uses a localized Lagrange basis that is constructed out of radial basis functions. By verifying that certain inf-sup conditions hold, we demonstrate that both the continuous and discrete problems are well-posed, and also present numerical and theoretical results for the convergence behavior of the method. The stiffness matrix is assembled by a special quadrature routine unique to the localized basis. Combining the quadrature method with the localized basis produces a well-conditioned, symmetric matrix, which is then used to compute the discrete solution.
This work explores the effect of the ill-posedness of the problem on uncertainty quantification for motion estimation using digital image correlation (DIC) (Sutton et al. 2009). We develop a correction factor for standard uncertainty estimates based on the cosine of the angle between the true motion and the image gradients, in an integral sense over a subregion of the image. This correction factor accounts for variability in the DIC solution that was previously unaccounted for when considering only image noise, interpolation bias, contrast, and software settings such as subset size and spacing.
A nonlocal convection-diffusion model is introduced for the master equation of Markov jump processes in bounded domains. With minimal assumptions on the model parameters, the nonlocal steady and unsteady state master equations are shown to be well-posed in a weak sense. Then the nonlocal operator is shown to be the generator of finite-range nonsymmetric jump processes and, when certain conditions on the model parameters hold, the generators of finite and infinite activity Lévy and Lévy-type jump processes are shown to be special instances of the nonlocal operator.
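As a point of reference, the master equation of a Markov jump process with jump rate kernel $\gamma$ takes the familiar gain-loss form (a generic statement; the paper's nonlocal model additionally handles bounded domains via volume constraints):

$$ \frac{\partial u}{\partial t}(x,t) \;=\; \int \bigl[\, u(y,t)\,\gamma(y,x) \;-\; u(x,t)\,\gamma(x,y) \,\bigr]\, dy, $$

where $u(\cdot,t)$ is the probability density of the process and $\gamma(x,y)$ is the rate of jumps from $x$ to $y$.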
We introduce a meshfree discretization for a nonlocal diffusion problem using a localized basis of radial basis functions. Our method consists of a conforming radial basis of local Lagrange functions for a variational formulation of a volume constrained nonlocal diffusion equation. We also establish an L2 error estimate on the local Lagrange interpolant. The stiffness matrix is assembled by a special quadrature routine unique to the localized basis. Combining the quadrature method with the localized basis produces a well-conditioned, sparse, symmetric positive definite stiffness matrix. We demonstrate that both the continuum and discrete problems are well-posed and present numerical results for the convergence behavior of the radial basis function method. We explore approximating the solution to inhomogeneous differential equations by solving inhomogeneous nonlocal integral equations using the proposed radial basis function method.
It is well known that the derivative-based classical approach to strain is problematic when the displacement field is irregular, noisy, or discontinuous. Difficulties arise wherever the displacements are not differentiable. We present an alternative, nonlocal approach to calculating strain from digital image correlation (DIC) data that is well-defined and robust, even for the pathological cases that undermine the classical strain measure. This integral formulation for strain has no spatial derivatives and when the displacement field is smooth, the nonlocal strain and the classical strain are identical. We submit that this approach to computing strains from displacements will greatly improve the fidelity and efficacy of DIC for new application spaces previously untenable in the classical framework.
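A hedged one-dimensional sketch of such an integral strain measure (the weight and exact form here are illustrative assumptions, not the paper's definition): with an even weight $w \geq 0$ supported on $[-\delta, \delta]$ and normalized so that $\int w = 1$, define

$$ \varepsilon_\delta(x) \;=\; \int_{-\delta}^{\delta} w(\xi)\, \frac{u(x+\xi) - u(x)}{\xi}\, d\xi . $$

No derivative of $u$ appears, yet a Taylor expansion shows $\varepsilon_\delta(x) = u'(x) + O(\delta^2)$ whenever $u$ is smooth, recovering the classical strain; when $u$ is noisy or discontinuous, the integral remains well defined.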
The purpose of this report is to investigate a partial differential equation (PDE) constrained optimization approach for estimating the velocity field given image data for use within digital image correlation (DIC). We first introduce the problem and the standard DIC approach, then demonstrate why the DIC problem is ill-posed and introduce a standard regularization of the problem. We also demonstrate, via a sequence of experiments based on a stochastic model inducing the PDE constraint, that the functional used is both sensitive and robust.
The contribution of the paper is the approximation of a classical diffusion operator by an integral equation with a volume constraint. A particular focus is on classical diffusion problems associated with Neumann boundary conditions. By exploiting this approximation, we can also approximate other quantities such as the flux out of a domain. Our analysis of the model equation on the continuum level is closely related to the recent work on nonlocal diffusion and peridynamic mechanics. In particular, we elucidate the role of a volumetric constraint as an approximation to a classical Neumann boundary condition in the presence of a physical boundary. The volume-constrained integral equation then provides the basis for accurate and robust discretization methods. An immediate application is to the understanding and improvement of the Smoothed Particle Hydrodynamics (SPH) method.
The mathematically correct specification of a fractional differential equation on a bounded domain requires specification of appropriate boundary conditions, or their fractional analogue. This paper discusses the application of nonlocal diffusion theory to specify well-posed fractional diffusion equations on bounded domains.
The purpose of this paper is to consider the exit-time problem for a finite-range Markov jump process, i.e., the distance the particle can jump is bounded independent of its location. Such jump diffusions are expedient models for anomalous transport exhibiting super-diffusion or nonstandard normal diffusion. We refer to the associated deterministic equation as a volume-constrained nonlocal diffusion equation. The volume constraint is the nonlocal analogue of a boundary condition necessary to demonstrate that the nonlocal diffusion equation is well-posed and is consistent with the jump process. A critical aspect of the analysis is a variational formulation and a recently developed nonlocal vector calculus. This calculus allows us to pose nonlocal backward and forward Kolmogorov equations, the former providing the various moments of the exit-time distribution.
The purpose of our paper is to consider the exit-time problem for a finite-range Markov jump process, i.e., the distance the particle can jump is bounded independent of its location. Such jump diffusions are expedient models for anomalous transport exhibiting super-diffusion or nonstandard normal diffusion. We refer to the associated deterministic equation as a volume-constrained nonlocal diffusion equation. The volume constraint is the nonlocal analogue of a boundary condition necessary to demonstrate that the nonlocal diffusion equation is well-posed and is consistent with the jump process. A critical aspect of the analysis is a variational formulation and a recently developed nonlocal vector calculus. Furthermore, this calculus allows us to pose nonlocal backward and forward Kolmogorov equations, the former providing the various moments of the exit-time distribution.
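Schematically (suppressing the precise volume-constrained setting of these papers), if the jump process has generator $(\mathcal{L}u)(x) = \int \bigl(u(y) - u(x)\bigr)\,\gamma(x,y)\,dy$, then the mean exit time $\tau$ from a domain $D$ solves the nonlocal backward Kolmogorov problem

$$ -\mathcal{L}\tau = 1 \ \text{ in } D, \qquad \tau = 0 \ \text{ on the interaction volume surrounding } D, $$

where the constraint on a surrounding layer whose thickness equals the maximum jump range replaces the classical boundary condition.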
The purpose of this report is to document a basic installation of the Anasazi eigensolver package and provide a brief discussion on the numerical solution of some graph eigenvalue problems.
The subject of this work is the development of models for the numerical simulation of matter, momentum, and energy balance in heterogeneous materials. These are materials that consist of multiple phases or species or that are structured on some (perhaps many) scale(s). By computational mechanics we mean to refer generally to the standard type of modeling that is done at the level of macroscopic balance laws (mass, momentum, energy). We will refer to the flow or flux of these quantities in a generalized sense as transport. At issue here are the forms of the governing equations in these complex materials, which are potentially strongly inhomogeneous below some correlation length scale and yet homogeneous on larger length scales. The question then becomes one of how to model this behavior and what the proper multi-scale equations are to capture the transport mechanisms across scales. To address this we look to the generalized stochastic processes that underlie the transport processes in homogeneous materials. The archetypal example is the relationship between random walk or Brownian motion stochastic processes and the associated Fokker-Planck or diffusion equation. Here we are interested in how this classical setting changes when inhomogeneities or correlations in structure are introduced into the problem. Aspects of non-classical behavior need to be addressed, such as non-Fickian behavior of the mean-squared displacement (MSD) and non-Gaussian behavior of the underlying probability distribution of jumps. We present an experimental technique and apparatus built to investigate some of these issues. We also discuss diffusive processes in inhomogeneous systems, consider the role of the chemical potential in the diffusion of hard spheres, and comment on the relevance to liquid metal solutions. Finally, we present an example of how inhomogeneities in material microstructure introduce fluctuations at the meso-scale for a thermal conduction problem. These fluctuations due to random microstructures also provide a means of characterizing the aleatory uncertainty in material properties at the mesoscale.
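For concreteness, the non-Fickian behavior referred to above is measured against the classical benchmark (standard definitions, not results of this report): Fickian diffusion in $d$ dimensions has mean-squared displacement $\langle |x(t) - x(0)|^2 \rangle = 2dDt$, whereas anomalous transport exhibits

$$ \langle |x(t) - x(0)|^2 \rangle \;\propto\; t^{\alpha}, \qquad \alpha \neq 1, $$

with $\alpha < 1$ (subdiffusion) or $\alpha > 1$ (superdiffusion), and the underlying distribution of jumps need not be Gaussian.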
Peridynamics is a nonlocal extension of classical continuum mechanics. The discrete peridynamic model has the same computational structure as a molecular dynamics model. This document provides a brief overview of the peridynamic model of a continuum, then discusses how the peridynamic model is discretized within LAMMPS. An example problem is also included.
The peridynamic model of solid mechanics treats internal forces within a continuum through interactions across finite distances. These forces are determined through a constitutive model that, in the case of an elastic material, permits the strain energy density at a point to depend on the collective deformation of all the material within some finite distance of it. The forces between points are evaluated from the Frechet derivative of this strain energy density with respect to the deformation map. The resulting equation of motion is an integro-differential equation written in terms of these interparticle forces, rather than the traditional stress tensor field. Recent work on peridynamics has elucidated the energy balance in the presence of these long-range forces. We have derived the appropriate analogue of stress power, called absorbed power, that leads to a satisfactory definition of internal energy. This internal energy is additive, allowing us to meaningfully define an internal energy density field in the body. An expression for the local first law of thermodynamics within peridynamics combines this mechanical component, the absorbed power, with heat transport. The global statement of the energy balance over a subregion can be expressed in a form in which the mechanical and thermal terms contain only interactions between the interior of the subregion and the exterior, in a form anticipated by Noll in 1955. The local form of this first law within peridynamics, coupled with the second law as expressed in the Clausius-Duhem inequality, is amenable to the Coleman-Noll procedure for deriving restrictions on the constitutive model for thermomechanical response. Using an idea suggested by Fried in the context of systems of discrete particles, this procedure leads to a dissipation inequality for peridynamics that has a surprising form. It also leads to a thermodynamically consistent way to treat damage within the theory, shedding light on how damage, including the nucleation and advance of cracks, should be incorporated into a constitutive model.
This report summarizes activities undertaken during FY08-FY10 for the LDRD Peridynamics as a Rigorous Coarse-Graining of Atomistics for Multiscale Materials Design. The goal of our project was to develop a coarse-graining of finite temperature molecular dynamics (MD) that successfully transitions from statistical mechanics to continuum mechanics. Our coarse-graining overcomes the intrinsic limitation of coupling atomistics with classical continuum mechanics via the FEM (finite element method), SPH (smoothed particle hydrodynamics), or MPM (material point method); namely, that classical continuum mechanics assumes a local force interaction that is incompatible with the nonlocal force model of atomistic methods. Therefore FEM, SPH, and MPM inherit this limitation. This seemingly innocuous dichotomy has far-reaching consequences; for example, classical continuum mechanics cannot resolve the short wavelength behavior associated with atomistics. Other consequences include spurious forces, invalid phonon dispersion relationships, and irreconcilable descriptions/treatments of temperature. We propose a statistically based coarse-graining of atomistics via peridynamics and so develop a first-of-its-kind mesoscopic capability to enable consistent, thermodynamically sound, atomistic-to-continuum (AtC) multiscale material simulation. Peridynamics (PD) is a microcontinuum theory that assumes nonlocal forces for describing long-range material interaction. The force interactions occurring at finite distances are naturally accounted for in PD. Moreover, PD's nonlocal force model is entirely consistent with those used by atomistic methods, in stark contrast to classical continuum mechanics. Hence, PD can be employed for mesoscopic phenomena that are beyond the realms of classical continuum mechanics and atomistic simulations, e.g., molecular dynamics and density functional theory (DFT). The latter two atomistic techniques are handicapped by the onerous length and time scales associated with simulating mesoscopic materials. Simulating such mesoscopic materials is likely to require, and greatly benefit from, multiscale simulations coupling DFT, MD, PD, and explicit transient dynamic finite element methods (e.g., Presto). The proposed work fills the gap needed to enable multiscale materials simulations.
The peridynamic theory of mechanics attempts to unite the mathematical modeling of continuous media, cracks, and particles within a single framework. It does this by replacing the partial differential equations of the classical theory of solid mechanics with integral or integro-differential equations. These equations are based on a model of internal forces within a body in which material points interact with each other directly over finite distances. The classical theory of solid mechanics is based on the assumption of a continuous distribution of mass within a body. It further assumes that all internal forces are contact forces that act across zero distance. The mathematical description of a solid that follows from these assumptions relies on partial differential equations that additionally assume sufficient smoothness of the deformation for the PDEs to make sense in either their strong or weak forms. The classical theory has been demonstrated to provide a good approximation to the response of real materials down to small length scales, particularly in single crystals, provided these assumptions are met. Nevertheless, technology increasingly involves the design and fabrication of devices at smaller and smaller length scales, even interatomic dimensions. Therefore, it is worthwhile to investigate whether the classical theory can be extended to permit relaxed assumptions of continuity, to include the modeling of discrete particles such as atoms, and to allow the explicit modeling of nonlocal forces that are known to strongly influence the behavior of real materials.
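The resulting replacement of the classical PDE is the standard peridynamic equation of motion (Silling's bond-based form):

$$ \rho(x)\,\ddot{u}(x,t) \;=\; \int_{\mathcal{H}_x} f\bigl(u(x',t) - u(x,t),\, x' - x\bigr)\, dV_{x'} \;+\; b(x,t), $$

where $\mathcal{H}_x$ is the neighborhood of $x$ within the horizon, $f$ is the pairwise force density supplied by the constitutive model, and $b$ is the body force. No spatial derivatives of $u$ appear, which is why discontinuous deformations such as cracks are admissible.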
Advanced computing hardware and software written to exploit massively parallel architectures greatly facilitate the computation of extremely large problems. On the other hand, these tools, though enabling higher fidelity models, have often resulted in much longer run-times and turnaround times in providing answers to engineering problems. The impediments include smaller elements and consequently smaller time steps, much larger systems of equations to solve, and the inclusion of nonlinearities that had been ignored when lower fidelity models were the norm. The research effort reported here focuses on accelerating the analysis process for structural dynamics through combinations of model reduction and mitigation of some factors that lead to over-meshing.
Peridynamics is a nonlocal formulation of continuum mechanics. The discrete peridynamic model has the same computational structure as a molecular dynamics model. This document details the implementation of a discrete peridynamic model within the LAMMPS molecular dynamics code: it provides a brief overview of the peridynamic model of a continuum, discusses how the peridynamic model is discretized, and overviews the LAMMPS implementation. A nontrivial example problem is also included.
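A schematic one-dimensional sketch of why the discrete model looks like MD (a prototype microelastic, PMB-like bond force with illustrative constants; not the LAMMPS implementation):

```python
import numpy as np

# 1-D sketch of a discrete bond-based (PMB-like) peridynamic force loop.
# Structurally identical to a short-range MD pair loop; constants illustrative.
n, dx = 100, 1.0
horizon, c = 3.0 * dx, 1.0          # interaction range and micromodulus
X = np.arange(n) * dx               # reference positions of the nodes
u = 0.01 * np.sin(2 * np.pi * X / (n * dx))   # a smooth displacement field
y = X + u                           # deformed positions

force = np.zeros(n)
for i in range(n):
    for j in range(n):
        xi = X[j] - X[i]            # reference bond vector
        if j == i or abs(xi) > horizon:
            continue                # outside the horizon: no interaction
        stretch = (abs(y[j] - y[i]) - abs(xi)) / abs(xi)
        # pairwise force density acting along the deformed bond direction
        force[i] += c * stretch * np.sign(y[j] - y[i]) * dx

print(force[:5])                    # nodal force densities near the left end
```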
This paper describes an elegant statistical coarse-graining of molecular dynamics at finite temperature into peridynamics, a continuum theory. Peridynamics is an efficient alternative to molecular dynamics enabling dynamics at larger length and time scales. In direct analogy with molecular dynamics, peridynamics uses a nonlocal model of force and does not employ stress/strain relationships germane to classical continuum mechanics. In contrast with classical continuum mechanics, the peridynamic representation of a system of linear springs and masses is shown to have the same dispersion relation as the original spring-mass system.
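The comparison invoked here is against the textbook result (standard, not derived in this paper): a one-dimensional chain of masses $m$ joined by springs of stiffness $\kappa$ and spacing $a$ has dispersion relation

$$ \omega(k) \;=\; 2\sqrt{\kappa/m}\,\bigl|\sin(ka/2)\bigr|, $$

and the claim is that the peridynamic representation of the same spring-mass system reproduces this relation, whereas a classical (local) continuum limit captures only its long-wavelength ($ka \ll 1$) behavior.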
This report is a collection of documents written as part of the Laboratory Directed Research and Development (LDRD) project A Mathematical Framework for Multiscale Science and Engineering: The Variational Multiscale Method and Interscale Transfer Operators. We present developments in two categories of multiscale mathematics and analysis. The first, continuum-to-continuum (CtC) multiscale, includes problems that allow application of the same continuum model at all scales with the primary barrier to simulation being computing resources. The second, atomistic-to-continuum (AtC) multiscale, represents applications where detailed physics at the atomistic or molecular level must be simulated to resolve the small scales, but the effect on and coupling to the continuum level is frequently unclear.
We have enhanced our parallel molecular dynamics (MD) simulation software LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator, lammps.sandia.gov) to include many new features for accelerated simulation including articulated rigid body dynamics via coupling to the Rensselaer Polytechnic Institute code POEMS (Parallelizable Open-source Efficient Multibody Software). We use new features of the LAMMPS software package to investigate rhodopsin photoisomerization, and water model surface tension and capillary waves at the vapor-liquid interface. Finally, we motivate the recipes of MD for practitioners and researchers in numerical analysis and computational mechanics.
The solution of the governing steady transport equations for momentum, heat and mass transfer in fluids undergoing non-equilibrium chemical reactions can be extremely challenging. The difficulties arise from both the complexity of the nonlinear solution behavior as well as the nonlinear, coupled, non-symmetric nature of the system of algebraic equations that results from spatial discretization of the PDEs. In this paper, we briefly review progress on developing a stabilized finite element (FE) capability for numerical solution of these challenging problems. The discussion considers the stabilized FE formulation for the low Mach number Navier-Stokes equations with heat and mass transport with non-equilibrium chemical reactions, and the solution methods necessary for detailed analysis of these complex systems. The solution algorithms include robust nonlinear and linear solution schemes, parameter continuation methods, and linear stability analysis techniques. Our discussion considers computational efficiency, scalability, and some implementation issues of the solution methods. Computational results are presented for a CFD benchmark problem as well as for a number of large-scale, 2D and 3D, engineering transport/reaction applications.
This paper analyzes the accuracy of the shift-invert Lanczos iteration for computing eigenpairs of the symmetric definite generalized eigenvalue problem. We provide bounds for the accuracy of the eigenpairs produced by shift-invert Lanczos given a residual reduction. We discuss the implications of our analysis for practical shift-invert Lanczos iterations. When the generalized eigenvalue problem arises from a conforming finite element method, we also comment on the uniform accuracy of bounds (independent of the mesh size h).
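For readers who want to experiment, SciPy exposes a shift-invert Lanczos iteration (via ARPACK) for the symmetric definite generalized problem $Ax = \lambda Mx$; this is an independent illustration, not the implementation analyzed in the paper.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Symmetric definite generalized problem A x = lambda M x from a 1-D finite
# element Laplacian (A = stiffness, M = consistent mass); SciPy/ARPACK
# shift-invert sketch, independent of the paper's analysis.
n = 200
h = 1.0 / (n + 1)
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc") / h
M = sp.diags([1.0, 4.0, 1.0], [-1, 0, 1], shape=(n, n), format="csc") * (h / 6)

sigma = 0.0  # shift near the eigenvalues of interest triggers shift-invert mode
vals, vecs = eigsh(A, k=5, M=M, sigma=sigma, which="LM")

exact = np.array([(j * np.pi) ** 2 for j in range(1, 6)])  # Dirichlet Laplacian
print(vals)    # computed eigenvalues nearest the shift
print(exact)   # should agree to O(h^2)
```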
Modal analysis of three-dimensional structures frequently involves finite element discretizations with millions of unknowns and requires computing hundreds or thousands of eigenpairs. In this presentation we review methods based on domain decomposition for such eigenspace computations in structural dynamics. We distinguish approaches that solve the eigenproblem algebraically (with minimal connections to the underlying partial differential equation) from approaches that tightly couple the eigensolver with the partial differential equation.
The solution of the governing steady transport equations for momentum, heat and mass transfer in fluids undergoing non-equilibrium chemical reactions can be extremely challenging. The difficulties arise from both the complexity of the nonlinear solution behavior as well as the nonlinear, coupled, non-symmetric nature of the system of algebraic equations that results from spatial discretization of the PDEs. In this paper, we briefly review progress on developing a stabilized finite element (FE) capability for numerical solution of these challenging problems. The discussion considers the stabilized FE formulation for the low Mach number Navier-Stokes equations with heat and mass transport with non-equilibrium chemical reactions, and the solution methods necessary for detailed analysis of these complex systems. The solution algorithms include robust nonlinear and linear solution schemes, parameter continuation methods, and linear stability analysis techniques. Our discussion considers computational efficiency, scalability, and some implementation issues of the solution methods. Computational results are presented for a CFD benchmark problem as well as for a number of large-scale, 2D and 3D, engineering transport/reaction applications.
Existing approaches in multiscale science and engineering have evolved from a range of ideas and solutions that are reflective of their original problem domains. As a result, research in multiscale science has followed widely diverse and disjoint paths, which presents a barrier to cross pollination of ideas and application of methods outside their application domains. The status of the research environment calls for an abstract mathematical framework that can provide a common language to formulate and analyze multiscale problems across a range of scientific and engineering disciplines. In such a framework, critical common issues arising in multiscale problems can be identified, explored and characterized in an abstract setting. This type of overarching approach would allow categorization and clarification of existing models and approximations in a landscape of seemingly disjoint, mutually exclusive and ad hoc methods. More importantly, such an approach can provide context for both the development of new techniques and their critical examination. As with any new mathematical framework, it is necessary to demonstrate its viability on problems of practical importance. At Sandia, lab-centric, prototype application problems in fluid mechanics, reacting flows, magnetohydrodynamics (MHD), shock hydrodynamics and materials science span an important subset of DOE Office of Science applications and form an ideal proving ground for new approaches in multiscale science.
The Trilinos Project is an effort to facilitate the design, development, integration and ongoing support of mathematical software libraries. In particular, our goal is to develop parallel solver algorithms and libraries within an object-oriented software framework for the solution of large-scale, complex multi-physics engineering and scientific applications. Our emphasis is on developing robust, scalable algorithms in a software framework, using abstract interfaces for flexible interoperability of components while providing a full-featured set of concrete classes that implement all abstract interfaces. Trilinos uses a two-level software structure designed around collections of packages. A Trilinos package is an integral unit usually developed by a small team of experts in a particular algorithms area such as algebraic preconditioners, nonlinear solvers, etc. Packages exist underneath the Trilinos top level, which provides a common look-and-feel, including configuration, documentation, licensing, and bug-tracking. Trilinos packages are primarily written in C++, but provide some C and Fortran user interface support. We provide an open architecture that allows easy integration with other solver packages and we deliver our software to the outside community via the GNU Lesser General Public License (LGPL). This report provides an overview of Trilinos, discussing the objectives, history, current development and future plans of the project.
LOCA, the Library of Continuation Algorithms, is a software library for performing stability analysis of large-scale applications. LOCA enables the tracking of solution branches as a function of a system parameter, the direct tracking of bifurcation points, and, when linked with the ARPACK library, a linear stability analysis capability. It is designed to be easy to implement around codes that already use Newton's method to converge to steady-state solutions. The algorithms are chosen to work for large problems, such as those that arise from discretizations of partial differential equations, and to run on distributed memory parallel machines. This manual presents LOCA's continuation and bifurcation analysis algorithms, and instructions on how to implement LOCA with an application code. The LOCA code is being made publicly available at www.cs.sandia.gov/loca.
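To convey the basic idea of parameter continuation around an existing Newton solver (a toy sketch only; LOCA's pseudo-arclength continuation and bifurcation tracking are far more capable), one steps the parameter and reuses the previous solution as the initial guess:

```python
# Toy natural-parameter continuation around a Newton solver (illustrative
# only; LOCA adds pseudo-arclength continuation, bifurcation tracking, etc.).
def newton(f, df, u, lam, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        r = f(u, lam)
        if abs(r) < tol:
            return u
        u -= r / df(u, lam)          # Newton update for this scalar problem
    raise RuntimeError("Newton failed to converge")

f  = lambda u, lam: u**3 - u + lam   # steady-state residual f(u, lambda) = 0
df = lambda u, lam: 3 * u**2 - 1     # its Jacobian with respect to u

u, branch = 1.0, []
for lam in [0.01 * k for k in range(30)]:
    u = newton(f, df, u, lam)        # previous solution seeds the next solve
    branch.append((lam, u))

print(branch[-1])                    # end of the tracked solution branch
```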