Publications


Level 1 Peer Review Process for the Sandia ASCI V and V Program: FY01 Final Report

Pilch, Martin P.; Froehlich, G.K.; Hodges, Ann L.; Peercy, David E.; Trucano, Timothy G.; Moya, Jaime L.

This report describes the results of the FY01 Level 1 Peer Reviews for the Verification and Validation (V&V) Program at Sandia National Laboratories. V&V peer review at Sandia is intended to assess the ASCI (Accelerated Strategic Computing Initiative) code team V&V planning process and execution. The Level 1 Peer Review process is conducted in accordance with the process defined in SAND2000-3099, and V&V Plans are developed in accordance with the guidelines defined in SAND2000-3101. The peer review process and the process for improving the guidelines are necessarily synchronized and form parts of a larger quality improvement process supporting the ASCI V&V program at Sandia. During FY00, a prototype of the process was conducted for two code teams and their V&V Plans, and the process and guidelines were updated based on that prototype. In FY01, Level 1 Peer Reviews were conducted on an additional eleven code teams and their respective V&V Plans. This report summarizes the results from those peer reviews, including recommendations from the panels that conducted the reviews.

DAKOTA, A Multilevel Parallel Object-Oriented Framework for Design Optimization, Parameter Estimation, Uncertainty Quantification, and Sensitivity Analysis Version 3.0 Developers Manual (title change from electronic posting)

Eldred, Michael S.; Giunta, Anthony A.; van Bloemen Waanders, Bart G.; Wojtkiewicz, Steven F.; Hart, William E.

The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, analytic reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.

DAKOTA, A Multilevel Parallel Object-Oriented Framework for Design Optimization, Parameter Estimation, Uncertainty Quantification, and Sensitivity Analysis Version 3.0

Eldred, Michael S.; Giunta, Anthony A.; van Bloemen Waanders, Bart G.; Wojtkiewicz, Steven F.; Hart, William E.

The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, analytic reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

Evaluation Techniques and Properties of an Exact Solution to a Subsonic Free Surface Jet Flow

Robinson, Allen C.

Computational techniques for the evaluation of steady plane subsonic flows represented by Chaplygin series in the hodograph plane are presented. These techniques are utilized to examine the properties of the free surface wall jet solution. This solution is a prototype for the shaped charge jet, a problem which is particularly difficult to compute properly using general purpose finite element or finite difference continuum mechanics codes. The shaped charge jet is a classic validation problem for models involving high explosives and material strength. Therefore, the problem studied in this report represents a useful verification problem associated with shaped charge jet modeling.

Assembly of LIGA using Electric Fields

Feddema, John T.; Warne, Larry K.; Johnson, William Arthur; Routson, Allison J.; Armour, David L.

The goal of this project was to develop a device that uses electric fields to grasp and possibly levitate LIGA parts. This non-contact form of grasping would solve many of the problems associated with handling parts whose dimensions are only a few microns. Scaling laws show that for parts this size, electrostatic and electromagnetic forces dominate gravitational forces, which is why micro-parts often stick to mechanical tweezers. If these forces can be regulated under feedback control, the parts could be levitated, and possibly even rotated, in air. In this project, we designed, fabricated, and tested several grippers that use electrostatic and electromagnetic fields to grasp and release metal LIGA parts. The eventual use of this tool will be to assemble metal and non-metal LIGA parts into small electromechanical systems.
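
A back-of-the-envelope version of that scaling argument (the uniform-field, Maxwell-stress form below is our illustration, not taken from the report): electrostatic force scales with surface area, gravity with volume, so their ratio grows without bound as the part shrinks.

```latex
\[
  F_{\mathrm{es}} \sim \tfrac{1}{2}\,\varepsilon_0 E^2 L^2 , \qquad
  F_g \sim \rho g L^3 , \qquad
  \frac{F_{\mathrm{es}}}{F_g} \sim \frac{\varepsilon_0 E^2}{2\rho g L}
  \;\xrightarrow{\;L \to 0\;}\; \infty .
\]
```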

General Concepts for Experimental Validation of ASCI Code Applications

Trucano, Timothy G.; Pilch, Martin P.; Oberkampf, William L.

This report presents general concepts in a broadly applicable methodology for validation of Accelerated Strategic Computing Initiative (ASCI) codes for Defense Programs applications at Sandia National Laboratories. The concepts are defined and analyzed within the context of their relative roles in an experimental validation process. Examples of applying the proposed methodology to three existing experimental validation activities are provided in appendices, using an appraisal technique recommended in this report.

LOCA 1.0 Library of Continuation Algorithms: Theory and Implementation Manual

Salinger, Andrew G.; Pawlowski, Roger P.; Lehoucq, Richard B.; Romero, L.A.; Wilkes, Edward D.

LOCA, the Library of Continuation Algorithms, is a software library for performing stability analysis of large-scale applications. LOCA enables the tracking of solution branches as a function of a system parameter, the direct tracking of bifurcation points, and, when linked with the ARPACK library, a linear stability analysis capability. It is designed to be easy to implement around codes that already use Newton's method to converge to steady-state solutions. The algorithms are chosen to work for large problems, such as those that arise from discretizations of partial differential equations, and to run on distributed memory parallel machines. This manual presents LOCA's continuation and bifurcation analysis algorithms, and instructions on how to implement LOCA with an application code. The LOCA code is being made publicly available at www.cs.sandia.gov/loca.
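
A minimal sketch of the idea LOCA automates: natural-parameter continuation wrapped around an application's Newton solver, where each converged steady state seeds the solve at the next parameter value. This is a generic illustration; the function names below are ours, not LOCA's API.

```python
import numpy as np

def newton(residual, jacobian, x, tol=1e-10, max_iter=20):
    """Converge x to a steady state satisfying residual(x) = 0."""
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            return x
        x = x - np.linalg.solve(jacobian(x), r)
    raise RuntimeError("Newton failed to converge")

def continuation(residual, jacobian, x0, lambdas):
    """Track a solution branch as the parameter lambda is stepped;
    each converged solution is the initial guess at the next step
    (a zeroth-order predictor)."""
    branch, x = [], x0
    for lam in lambdas:
        x = newton(lambda z: residual(z, lam),
                   lambda z: jacobian(z, lam), x)
        branch.append((lam, x.copy()))
    return branch

# Example: scalar problem x^2 - lambda = 0, following the branch x > 0.
res = lambda x, lam: np.array([x[0] ** 2 - lam])
jac = lambda x, lam: np.array([[2.0 * x[0]]])
branch = continuation(res, jac, np.array([1.0]), np.linspace(1.0, 4.0, 7))
# final entry is (4.0, [2.0]) to solver tolerance
```

Bifurcation tracking and pseudo-arclength stepping, which can round the turning points where natural continuation fails, build on these same ingredients.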

Verification and Validation in Computational Fluid Dynamics

Oberkampf, William L.; Trucano, Timothy G.

Verification and validation (V&V) are the primary means to assess accuracy and reliability in computational simulations. This paper presents an extensive review of the literature in V&V in computational fluid dynamics (CFD), discusses methods and procedures for assessing V&V, and develops a number of extensions to existing ideas. The review of the development of V&V terminology and methodology points out the contributions from members of the operations research, statistics, and CFD communities. Fundamental issues in V&V are addressed, such as code verification versus solution verification, model validation versus solution validation, the distinction between error and uncertainty, conceptual sources of error and uncertainty, and the relationship between validation and prediction. The fundamental strategy of verification is the identification and quantification of errors in the computational model and its solution. In verification activities, the accuracy of a computational solution is primarily measured relative to two types of highly accurate solutions: analytical solutions and highly accurate numerical solutions. Methods for determining the accuracy of numerical solutions are presented and the importance of software testing during verification activities is emphasized.

Molecular Simulation of Reacting Systems

Thompson, Aidan P.

The final report for a Laboratory Directed Research and Development project entitled "Molecular Simulation of Reacting Systems" is presented. It describes efforts to incorporate chemical reaction events into the LAMMPS massively parallel molecular dynamics (MD) code. This was accomplished using a scheme in which several classes of reactions are allowed to occur in a probabilistic fashion at specified times during the MD simulation. Three classes of reaction were implemented: addition, chain transfer, and scission. A fully parallel implementation was achieved using a checkerboarding scheme, which avoids conflicts due to reactions occurring on neighboring processors; the observed chemical evolution is independent of the number of processors used. The code was applied to two test applications: irreversible linear polymerization and thermal degradation chemistry.
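
A schematic of the probabilistic reaction step described above, reduced to its core: at specified times, candidate pairs react with a probability set by a rate constant, and each site is consumed at most once per cycle so simultaneous events cannot conflict. The names and the exponential rate law are our illustration, not the LAMMPS implementation.

```python
import math, random

def attempt_reactions(candidate_pairs, k, dt, rng=random.random):
    """Accept each candidate pair with p = 1 - exp(-k*dt), consuming
    each reactive site at most once per reaction cycle."""
    reacted, used = [], set()
    for i, j in candidate_pairs:
        if i in used or j in used:
            continue  # site already reacted this cycle
        if rng() < 1.0 - math.exp(-k * dt):
            reacted.append((i, j))   # e.g. form an addition bond
            used.update((i, j))
    return reacted
```

On a parallel machine the same idea is applied one spatial "color" at a time (the checkerboarding mentioned above), so sites owned by neighboring processors are never eligible in the same sweep.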

On the Development of the Large Eddy Simulation Approach for Modeling Turbulent Flow: LDRD Final Report

Schmidt, Rodney C.; DesJardin, Paul E.; Voth, Thomas E.; Christon, Mark A.; Kerstein, Alan R.; Wunsch, Scott E.

This report describes research and development of the large eddy simulation (LES) turbulence modeling approach conducted as part of Sandia's laboratory directed research and development (LDRD) program. The emphasis of the work described here has been toward developing the capability to perform accurate and computationally affordable LES calculations of engineering problems using unstructured-grid codes, in wall-bounded geometries, and for problems with coupled physics. Specific contributions documented here include (1) the implementation and testing of LES models in Sandia codes, including tests of a new conserved-scalar laminar flamelet SGS combustion model that does not assume statistical independence between the mixture fraction and the scalar dissipation rate, (2) the development and testing of statistical analysis and visualization utility software developed for Exodus II unstructured-grid LES, and (3) the development and testing of a novel LES near-wall subgrid model based on the One-Dimensional Turbulence (ODT) model.

Tetrahedral mesh improvement via optimization of the element condition number

International Journal for Numerical Methods in Engineering

Freitag, Lori A.; Knupp, Patrick K.

We present a new shape measure for tetrahedral elements that is optimal in that it gives the distance of a tetrahedron from the set of inverted elements. This measure is constructed from the condition number of the linear transformation between a unit equilateral tetrahedron and any tetrahedron with positive volume. Using this shape measure, we formulate two optimization objective functions that are differentiated by their goal: the first seeks to improve the average quality of the tetrahedral mesh; the second aims to improve the worst-quality element in the mesh. We review the optimization techniques used with each objective function and present experimental results that demonstrate the effectiveness of the mesh improvement methods. We show that a combined optimization approach that uses both objective functions obtains the best-quality meshes for several complex geometries.
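
A small sketch of the shape measure in question: take the linear map S from a unit equilateral tetrahedron to the physical element and score the element by the (Frobenius) condition number of S. The normalization below, which puts the score in [0, 1] with 1 for equilateral, is one common convention; see the paper for the exact form used.

```python
import numpy as np

# Columns of W are the edge vectors of a unit equilateral tetrahedron
# emanating from one vertex.
W = np.array([[1.0, 0.5, 0.5],
              [0.0, np.sqrt(3) / 2, np.sqrt(3) / 6],
              [0.0, 0.0, np.sqrt(2.0 / 3.0)]])

def tet_quality(x0, x1, x2, x3):
    A = np.column_stack([x1 - x0, x2 - x0, x3 - x0])  # physical edge matrix
    if np.linalg.det(A) <= 0.0:
        return 0.0                      # inverted or degenerate element
    S = A @ np.linalg.inv(W)            # reference -> physical map
    kappa = (np.linalg.norm(S, "fro")
             * np.linalg.norm(np.linalg.inv(S), "fro"))
    return 3.0 / kappa                  # 1 = equilateral; -> 0 as it degenerates

print(tet_quality(np.zeros(3), W[:, 0], W[:, 1], W[:, 2]))  # ~1.0
```

An averaging objective sums such per-element scores across the mesh, while the worst-element objective maximizes their minimum, matching the two goals described above.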

Compact vs. exponential-size LP relaxations

Operations Research Letters

Carr, Robert D.; Lancia, Giuseppe

In this paper, we illustrate by means of examples a technique for formulating compact (i.e. polynomial-size) linear programming relaxations in place of exponential-size models requiring separation algorithms. In the same vein as a celebrated theorem by Grötschel, Lovász and Schrijver, we state the equivalence of compact separation and compact optimization. Among the examples used to illustrate our technique, we introduce a new formulation for the traveling salesman problem, whose relaxation we show to be equivalent to the subtour elimination relaxation.
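
For concreteness, the exponential-size model referred to is the classical subtour elimination relaxation of the TSP (standard notation, not the paper's compact reformulation): one degree equation per city, plus a cut constraint for every vertex subset S, of which there are exponentially many.

```latex
\begin{align*}
\min\ & \textstyle\sum_{e \in E} c_e x_e \\
\text{s.t.}\ & \textstyle\sum_{e \in \delta(v)} x_e = 2
  && \forall v \in V, \\
& \textstyle\sum_{e \in \delta(S)} x_e \ge 2
  && \forall S \subset V,\ 2 \le |S| \le |V| - 2, \\
& 0 \le x_e \le 1 && \forall e \in E.
\end{align*}
```

A compact formulation replaces the exponential family of cut constraints with polynomially many variables and constraints (e.g. flow variables certifying each cut), at the price of an extended variable space.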

User Manual and Supporting Information for Library of Codes for Centroidal Voronoi Point Placement and Associated Zeroth, First, and Second Moment Determination

Brannon, Rebecca M.

The theory, numerical algorithm, and user documentation are provided for a new "Centroidal Voronoi Tessellation (CVT)" method of filling a region of space (2D or 3D) with particles at any desired particle density. "Clumping" is entirely avoided and the boundary is optimally resolved. This particle placement capability is needed for any so-called "mesh-free" method in which physical fields are discretized via arbitrary-connectivity discrete points. CVT exploits efficient statistical methods to avoid expensive generation of Voronoi diagrams. Nevertheless, if a CVT particle's Voronoi cell were to be explicitly computed, then it would have a centroid that coincides with the particle itself and a minimized rotational moment. The CVT code provides each particle's volume and centroid, and also the rotational moment matrix needed to approximate a particle by an ellipsoid (instead of a simple sphere). DIATOM region specification is supported.
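
A sketch of the statistical approach mentioned above, in its simplest form: rather than building Voronoi diagrams, scatter random sample points, assign each to its nearest particle, and move every particle to the mean of its samples (a k-means/Lloyd iteration whose fixed point is a CVT). The library's actual algorithm, options, and DIATOM support go well beyond this.

```python
import numpy as np

def cvt(n_particles, dim=2, n_samples=20_000, iters=50, seed=0):
    """Probabilistic CVT of the unit box: returns well-spaced particles."""
    rng = np.random.default_rng(seed)
    pts = rng.random((n_particles, dim))          # random initial particles
    for _ in range(iters):
        samples = rng.random((n_samples, dim))    # fresh Monte Carlo samples
        # nearest particle for every sample (implicit Voronoi assignment)
        d2 = ((samples[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2)
        owner = d2.argmin(axis=1)
        # move each particle to the centroid of the samples it owns
        for k in range(n_particles):
            mine = samples[owner == k]
            if len(mine):
                pts[k] = mine.mean(axis=0)
    return pts

particles = cvt(64)   # 64 clump-free points covering the unit square
```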

An Evaluation of the Material Point Method

Brannon, Rebecca M.

The theory and algorithm for the Material Point Method (MPM) are documented, with a detailed discussion on the treatments of boundary conditions and shock wave problems. A step-by-step solution scheme is written based on direct inspection of the two-dimensional MPM code currently used at the University of Missouri-Columbia (which is, in turn, a legacy of the University of New Mexico code). To test the completeness of the solution scheme and to demonstrate certain features of the MPM, a one-dimensional MPM code is programmed to solve one-dimensional wave and impact problems, with both linear elasticity and elastoplasticity models. The advantages and disadvantages of the MPM are investigated as compared with competing mesh-free methods. Based on the current work, future research directions are discussed to better simulate complex physical problems such as impact/contact, localization, crack propagation, penetration, perforation, fragmentation, and interactions among different material phases. In particular, the potential use of a boundary layer to enforce the traction boundary conditions is discussed within the framework of the MPM.
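
To make the algorithm concrete, here is one time step of a bare-bones 1D MPM with linear grid shape functions and a hypoelastic stress update, following the generic skeleton (project to grid, solve on grid, update particles). This is a teaching sketch under our own simplifications, not the code evaluated in the report.

```python
import numpy as np

def mpm_step(xp, vp, sig, mp, Vp, nodes, dx, dt, E):
    """Advance particles one step: positions xp, velocities vp, stresses
    sig, masses mp, volumes Vp, on a uniform grid `nodes` of spacing dx."""
    nn = len(nodes)
    mg, pg, fg = np.zeros(nn), np.zeros(nn), np.zeros(nn)
    left = np.clip(((xp - nodes[0]) // dx).astype(int), 0, nn - 2)

    # 1) project particle mass, momentum, and internal force to the grid
    for p in range(len(xp)):
        for i in (left[p], left[p] + 1):
            N = 1.0 - abs(xp[p] - nodes[i]) / dx           # linear hat
            dN = (1.0 if nodes[i] > xp[p] else -1.0) / dx  # its gradient
            mg[i] += N * mp[p]
            pg[i] += N * mp[p] * vp[p]
            fg[i] -= Vp[p] * sig[p] * dN                   # internal force

    # 2) update momentum on the grid (skip empty nodes)
    vg = np.zeros(nn)
    full = mg > 1e-12
    vg[full] = (pg[full] + dt * fg[full]) / mg[full]

    # 3) interpolate back: PIC velocity update, hypoelastic stress update
    for p in range(len(xp)):
        v_new = dvdx = 0.0
        for i in (left[p], left[p] + 1):
            N = 1.0 - abs(xp[p] - nodes[i]) / dx
            dN = (1.0 if nodes[i] > xp[p] else -1.0) / dx
            v_new += N * vg[i]
            dvdx += dN * vg[i]
        vp[p] = v_new                 # (FLIP would blend in the old vp)
        xp[p] += dt * v_new
        sig[p] += dt * E * dvdx       # 1D linear elasticity
    return xp, vp, sig
```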

DNA Microarray Technology

Davidson, George S.

Collaboration between Sandia National Laboratories and the University of New Mexico Biology Department resulted in the capability to train students in microarray techniques and the interpretation of data from microarray experiments. These studies provide for a better understanding of the role of stationary phase and the gene regulation involved in exit from stationary phase, which may eventually have important clinical implications. Importantly, this research trained numerous students and is the basis for three new Ph.D. projects.

Processor allocation on Cplant: Achieving general processor locality using one-dimensional allocation strategies

Proceedings - IEEE International Conference on Cluster Computing, ICCC

Leung, Vitus J.; Arkin, E.M.; Bender, M.A.; Bunde, D.; Johnston, J.; Lal, Alok; Mitchell, J.S.B.; Phillips, C.; Seiden, S.S.

The Computational Plant, or Cplant, is a commodity-based supercomputer under development at Sandia National Laboratories. This paper describes resource-allocation strategies to achieve processor locality for parallel jobs on Cplant and other supercomputers. Users of Cplant and other Sandia supercomputers submit parallel jobs to a job queue. When a job is scheduled to run, it is assigned to a set of processors. To obtain maximum throughput, jobs should be allocated to localized clusters of processors to minimize communication costs and to avoid bandwidth contention caused by overlapping jobs. This paper introduces new allocation strategies and performance metrics based on space-filling curves and one-dimensional allocation strategies. These algorithms are general and simple. Preliminary simulations and Cplant experiments indicate that both space-filling curves and one-dimensional packing improve processor locality compared to the sorted free-list strategy previously used on Cplant. These new allocation strategies are implemented in the new release of the Cplant System Software, Version 2.0, phased into the Cplant systems at Sandia by May 2002.
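
A sketch of the space-filling-curve idea: order the machine's mesh of processors along a curve, then hand each job a window of free processors that is contiguous in curve order, so allocations stay physically clustered. We use a Z-order (Morton) curve for brevity; the curve choice and the windowing heuristic below are illustrative, not the exact Cplant algorithm.

```python
def morton(x, y, bits=16):
    """Interleave the bits of mesh coordinates (x, y) into a curve index."""
    z = 0
    for b in range(bits):
        z |= ((x >> b) & 1) << (2 * b) | ((y >> b) & 1) << (2 * b + 1)
    return z

def allocate(free_nodes, count):
    """free_nodes: iterable of (x, y) coordinates of idle processors.
    Returns `count` of them forming the tightest run in curve order."""
    order = sorted(free_nodes, key=lambda xy: morton(*xy))
    if count > len(order):
        return None   # not enough processors; the job waits
    # pick the window of consecutive free nodes with the smallest
    # curve-index span: a simple one-dimensional locality heuristic
    start = min(range(len(order) - count + 1),
                key=lambda i: morton(*order[i + count - 1]) - morton(*order[i]))
    return order[start:start + count]
```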

Statistical Validation of Engineering and Scientific Models: A Maximum Likelihood Based Metric

Hills, Richard G.; Trucano, Timothy G.

Two major issues associated with model validation are addressed here. First, we present a maximum likelihood approach to define and evaluate a model validation metric. The advantages of this approach are that it is more easily applied to nonlinear problems than the methods presented earlier by Hills and Trucano (1999, 2001); that it is based on optimization, for which software packages are readily available; and that it can be extended more easily to handle measurement uncertainty and prediction uncertainty with different probability structures. Several examples are presented utilizing this metric, and we show conditions under which this approach reduces to the approach developed previously by Hills and Trucano (2001). Second, we expand our earlier discussions (Hills and Trucano, 1999, 2001) of the impact of multivariate correlation and its effect on model validation metrics. We show that ignoring correlation in multivariate data can lead to misleading results, such as rejecting a good model when sufficient evidence to do so is not available.
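
An illustration of the correlation point, under a simplified Gaussian error model that is ours rather than the report's: the weighted residual r = dᵀC⁻¹d is chi-square distributed when the model is good, and zeroing the off-diagonal covariance terms changes both r and the resulting verdict.

```python
import numpy as np
from scipy import stats

def validation_metric(y_model, y_data, cov):
    """Return r = d^T C^{-1} d and its chi-square tail probability."""
    d = y_data - y_model
    r = float(d @ np.linalg.solve(cov, d))
    return r, stats.chi2.sf(r, df=len(d))

y_model = np.array([0.0, 0.0])
y_data = np.array([1.0, 1.0])          # a common shift in both channels
cov = np.array([[1.0, 0.9],
                [0.9, 1.0]])           # strongly correlated errors

print(validation_metric(y_model, y_data, cov))                    # r ~ 1.05
print(validation_metric(y_model, y_data, np.diag(np.diag(cov))))  # r = 2.0
```

Treating the correlated channels as independent roughly doubles the metric here; with more channels the distortion grows and can push a good model past a rejection threshold.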

A p-Adic Metric for Particle Mass Scale Organization with Genetic Divisors

Wagner, John S.

The concept of genetic divisors can be given a quantitative measure with a non-Archimedean p-adic metric that is both computationally convenient and physically motivated. For two particles possessing distinct mass parameters x and y, the metric distance D(x, y) is expressed on the field of rational numbers Q as the inverse of the greatest common divisor [gcd(x, y)]. As a measure of genetic similarity, this metric can be applied to (1) the mass numbers of particle states and (2) the corresponding subgroup orders of these systems. The use of the Bezout identity in the form of a congruence for the expression of the gcd(x, y) corresponding to the ν_e and ν_μ neutrinos (a) connects the genetic divisor concept to the cosmic seesaw congruence, (b) provides support for the δ-conjecture concerning the subgroup structure of particle states, and (c) quantitatively strengthens the interlocking relationships joining the values of the prospectively derived (i) electron neutrino (ν_e) mass (0.808 meV), (ii) muon neutrino (ν_μ) mass (27.68 meV), and (iii) unified strong-electroweak coupling constant (α*⁻¹ = 34.26).
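
The distance itself is elementary to compute; a toy illustration (the numbers below are arbitrary, not particle data):

```python
from math import gcd

def genetic_distance(x: int, y: int) -> float:
    """Non-Archimedean metric D(x, y) = 1 / gcd(x, y) on integer-scaled
    mass parameters: a large common divisor means a small distance."""
    return 1.0 / gcd(x, y)

print(genetic_distance(840, 1260))  # gcd = 420 -> D ~ 0.0024 ("close")
print(genetic_distance(840, 841))   # gcd = 1   -> D = 1.0    ("far")
```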

On the development of a gridless inflation code for parachute simulations

16th AIAA Aerodynamic Decelerator Systems Technology Conference and Seminar

Strickland, James H.; Homicz, G.F.; Gossler, A.A.; Porter, V.L.

This paper discusses the current status of an unsteady 3D parachute simulation code under development at Sandia National Laboratories as part of the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). The Vortex Inflation PARachute code (VIPAR) that embodies this effort is being developed to perform complete numerical simulations of ribbon parachute deployment, inflation, and steady descent utilizing several thousand processors on one of the DOE "teraFLOP" computers. First-generation working serial and parallel versions of the uncoupled fluids code, which simulates unsteady 3D incompressible flows around bluff bodies with complex geometries, have been developed. Preliminary results from the uncoupled fluids code, along with the fluid-structure coupling strategy, are presented herein.
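
The kernel at the heart of any such gridless vortex method is the velocity induced at a point by the vortex elements. A minimal 2D smoothed point-vortex version is sketched below; VIPAR's actual formulation is 3D, unsteady, and far more elaborate.

```python
import numpy as np

def induced_velocity(targets, vortex_pos, gamma, core=1e-3):
    """Biot-Savart sum: velocity at `targets` (m,2) induced by point
    vortices at `vortex_pos` (n,2) with circulations `gamma` (n,).
    `core` is a smoothing radius that regularizes close encounters."""
    u = np.zeros_like(targets, dtype=float)
    for xv, g in zip(vortex_pos, gamma):
        r = targets - xv                        # separation vectors
        r2 = (r ** 2).sum(axis=1) + core ** 2   # smoothed squared distance
        u[:, 0] += -g * r[:, 1] / (2 * np.pi * r2)
        u[:, 1] += g * r[:, 0] / (2 * np.pi * r2)
    return u
```

Because every element influences every other, the direct sum is O(n²); production vortex codes lean on fast summation and parallel decomposition, which is where the several thousand processors mentioned above come in.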

Applications of Transport/Reaction Codes to Problems in Cell Modeling

Means, Shawn A.; Rintoul, Mark D.; Shadid, John N.

We demonstrate two specific examples that show how our existing capabilities in solving large systems of partial differential equations associated with transport/reaction systems can be easily applied to outstanding problems in computational biology. First, we examine a three-dimensional model for calcium wave propagation in a Xenopus laevis frog egg and verify that a proposed model for the distribution of calcium release sites agrees with experimental results as a function of both space and time. Next, we create a model of the neuron's terminus based on experimental observations and show that the sodium-calcium exchanger is not the route of sodium's modulation of neurotransmitter release. These state-of-the-art simulations were performed on massively parallel platforms and required almost no modification of existing Sandia codes.
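
A one-species caricature of such transport/reaction problems: explicit finite-difference reaction-diffusion in 1D with bistable kinetics, which supports traveling fronts reminiscent of a calcium wave. The parameters and reaction term are our toy choices, not the models used in the work above.

```python
import numpy as np

def step(c, D, dx, dt, a=0.1):
    """One explicit Euler step of c_t = D c_xx + c(1-c)(c-a), periodic BCs."""
    lap = (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx**2
    return c + dt * (D * lap + c * (1 - c) * (c - a))

n, dx, D = 400, 0.1, 1e-2
dt = 0.2 * dx**2 / D            # safely inside the explicit stability limit
c = np.zeros(n)
c[:20] = 1.0                    # locally "released" region seeds the front
for _ in range(2000):
    c = step(c, D, dx, dt)      # the high-c front invades the domain
```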

Icarus: A 2-D Direct Simulation Monte Carlo (DSMC) Code for Multi-Processor Computers

Bartel, Timothy J.; Plimpton, Steven J.; Gallis, Michail A.

Icarus is a 2D Direct Simulation Monte Carlo (DSMC) code which has been optimized for the parallel computing environment. The code is based on the DSMC method of Bird [11.1] and models flowfields from the free-molecular to the continuum regime in either Cartesian (x, y) or axisymmetric (z, r) coordinates. Computational particles, representing a given number of molecules or atoms, are tracked as they have collisions with other particles or surfaces. Multiple species, internal energy modes (rotation and vibration), chemistry, and ion transport are modeled. A new trace-species methodology for collisions and chemistry is used to obtain statistics for small species concentrations. Gas-phase chemistry is modeled using steric factors derived from Arrhenius reaction rates or in a manner similar to continuum modeling. Surface chemistry is modeled with surface reaction probabilities; an optional site-density, energy-dependent coverage model is included. Electrons are modeled either with a local charge-neutrality assumption or as discrete simulation particles. Ion chemistry is modeled with electron-impact chemistry rates and charge-exchange reactions. Coulomb collision cross-sections are used instead of Variable Hard Sphere values for ion-ion interactions. The electrostatic fields can be externally input, computed from a Langmuir-Tonks model, or obtained from a Green's function (boundary element) based Poisson solver. Icarus has been used for subsonic to hypersonic, chemically reacting, and plasma flows. The Icarus software package includes the grid generation, parallel processor decomposition, post-processing, and restart software. The commercial graphics package Tecplot is used for graphics display. All of the software packages are written in standard Fortran.
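
For orientation, here is the core of a DSMC collision step in the no-time-counter (NTC) scheme used by codes of this family: select a conservative number of candidate pairs per cell, then accept each by acceptance-rejection on its relative speed. This is generic hard-sphere DSMC, not Icarus's actual routines.

```python
import numpy as np

def collide_cell(v, w, sigma, vol, dt, sg_max, rng):
    """v: (n,3) velocities in one cell; w: real molecules per simulation
    particle; sigma: hard-sphere cross-section. Returns updated v, sg_max."""
    n = len(v)
    if n < 2:
        return v, sg_max
    # NTC candidate count: 0.5 n (n-1) w (sigma g)_max dt / V_cell
    n_cand = int(0.5 * n * (n - 1) * w * sg_max * dt / vol + rng.random())
    for _ in range(n_cand):
        i, j = rng.choice(n, size=2, replace=False)
        g = np.linalg.norm(v[i] - v[j])           # relative speed
        sg_max = max(sg_max, sigma * g)           # track the running maximum
        if rng.random() < sigma * g / sg_max:     # acceptance-rejection
            ct = 2.0 * rng.random() - 1.0         # isotropic scattering angle
            st = np.sqrt(1.0 - ct * ct)
            phi = 2.0 * np.pi * rng.random()
            gn = g * np.array([st * np.cos(phi), st * np.sin(phi), ct])
            vcm = 0.5 * (v[i] + v[j])             # equal-mass molecules
            v[i], v[j] = vcm + 0.5 * gn, vcm - 0.5 * gn
    return v, sg_max
```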

ACME - Algorithms for Contact in a Multiphysics Environment API Version 1.0

Brown, Kevin H.; Summers, Randall M.; Glass, Micheal W.; Gullerud, Arne S.; Heinstein, Martin W.; Jones, Reese E.

An effort is underway at Sandia National Laboratories to develop a library of algorithms to search for potential interactions between surfaces represented by analytic and discretized topological entities. This effort is also developing algorithms to determine forces due to these interactions for transient dynamics applications. This document describes the Application Programming Interface (API) for the ACME (Algorithms for Contact in a Multiphysics Environment) library.
