Publications

Results 9726–9750 of 9,998

Amesos 1.0 reference guide

Sala, Marzio S.

This document describes the main functionality of the Amesos package, version 1.0. Amesos, available as part of Trilinos 4.0, provides an object-oriented interface to several serial and parallel sparse direct solver libraries for the solution of linear systems of equations A X = B, where A is a real, sparse, distributed matrix defined as an Epetra_RowMatrix object, and X and B are defined as Epetra_MultiVector objects. Amesos provides a common look and feel across several direct solvers, insulating the user from each package's details, such as matrix and vector formats and data distribution.
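The central efficiency argument for direct solvers of this kind is that A is factored once and the factorization is reused for every column of B. A minimal sketch of that factor-once, solve-many pattern in plain Python (this illustrates the idea only, not the Amesos C++ API; the matrix and right-hand sides are made up):

```python
# Illustrative only: a direct solver factors A once, then reuses the
# factorization to solve A X = B for many right-hand sides.

def lu_factor(A):
    """LU factorization with partial pivoting; returns (LU, perm)."""
    n = len(A)
    LU = [row[:] for row in A]
    perm = list(range(n))
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(LU[i][k]))  # pivot row
        LU[k], LU[p] = LU[p], LU[k]
        perm[k], perm[p] = perm[p], perm[k]
        for i in range(k + 1, n):
            LU[i][k] /= LU[k][k]
            for j in range(k + 1, n):
                LU[i][j] -= LU[i][k] * LU[k][j]
    return LU, perm

def lu_solve(LU, perm, b):
    """Solve A x = b using the stored factorization."""
    n = len(LU)
    x = [b[p] for p in perm]
    for i in range(n):                    # forward substitution (L)
        for j in range(i):
            x[i] -= LU[i][j] * x[j]
    for i in reversed(range(n)):          # back substitution (U)
        for j in range(i + 1, n):
            x[i] -= LU[i][j] * x[j]
        x[i] /= LU[i][i]
    return x

A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
LU, perm = lu_factor(A)
# Two right-hand sides (the columns of B), solved with one factorization.
X = [lu_solve(LU, perm, b) for b in ([5.0, 6.0, 5.0], [4.0, 1.0, 0.0])]
```

The expensive O(n³) factorization is amortized over all columns of B, which each cost only an O(n²) pair of triangular solves.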


Trilinos 4.0 tutorial

Sala, Marzio S.; Heroux, Michael A.; Day, David M.

The Trilinos Project is an effort to facilitate the design, development, integration, and ongoing support of mathematical software libraries. The goal of the Trilinos Project is to develop parallel solver algorithms and libraries within an object-oriented software framework for the solution of large-scale, complex multiphysics engineering and scientific applications. The emphasis is on developing robust, scalable algorithms in a software framework, using abstract interfaces for flexible interoperability of components while providing a full-featured set of concrete classes that implement all the abstract interfaces. This document introduces the use of Trilinos, version 4.0. The presented material includes, among other topics, the definition of distributed matrices and vectors with Epetra, the iterative solution of linear systems with AztecOO, incomplete factorizations with IFPACK, multilevel and domain decomposition preconditioners with ML, the direct solution of linear systems with Amesos, and the iterative solution of nonlinear systems with NOX. The tutorial is a self-contained introduction, intended to help computational scientists effectively apply the appropriate Trilinos package to their applications. Basic examples are presented that are suitable for imitation. This document is a companion to the Trilinos User's Guide [20] and the Trilinos Development Guides [21,22]. Please note that the documentation included in each of the Trilinos packages is of fundamental importance.


ML 3.0 smoothed aggregation user's guide

Sala, Marzio S.; Hu, Jonathan J.; Tuminaro, Raymond S.

ML is a multigrid preconditioning package intended to solve linear systems of equations Ax = b, where A is a user-supplied n x n sparse matrix, b is a user-supplied vector of length n, and x is a vector of length n to be computed. ML should be used on large sparse linear systems arising from partial differential equation (PDE) discretizations. While technically any linear system can be considered, ML should be used on linear systems that correspond to problems on which multigrid methods work well (e.g. elliptic PDEs). ML can be used as a stand-alone package or to generate preconditioners for a traditional iterative solver package (e.g. Krylov methods). Support is supplied for working with the AZTEC 2.1 and AztecOO iterative packages [15]; however, other solvers can be used by supplying a few functions. This document describes one specific algebraic multigrid approach: smoothed aggregation. This approach is used within several specialized multigrid methods: one for the eddy-current formulation of Maxwell's equations, and a multilevel and domain decomposition method for symmetric and non-symmetric systems of equations (such as elliptic equations, or compressible and incompressible fluid dynamics problems). Other methods exist within ML but are not described in this document. Examples are given illustrating the problem definition and exercising multigrid options.
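The multigrid idea that ML builds on can be sketched with a two-grid cycle for the 1D Poisson equation: smooth the error on the fine grid, then solve for the remaining smooth error on a coarser grid. This is a minimal illustration in plain Python, not ML's smoothed-aggregation algorithm; the grid size and smoother choices are arbitrary:

```python
import math

def residual(u, f, h):
    """r = f - A u for the 1D Poisson operator with zero Dirichlet BCs."""
    r = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        r[i] = f[i] - (2.0 * u[i] - u[i - 1] - u[i + 1]) / h**2
    return r

def jacobi(u, f, h, sweeps, omega=2.0 / 3.0):
    """Weighted Jacobi smoothing: damps high-frequency error components."""
    for _ in range(sweeps):
        new = u[:]
        for i in range(1, len(u) - 1):
            new[i] = (1 - omega) * u[i] + omega * (u[i - 1] + u[i + 1] + h**2 * f[i]) / 2.0
        u = new
    return u

def tridiag_solve(f, h):
    """Direct (Thomas) solve of the 1D Poisson system; the coarse solver."""
    n = len(f) - 1
    a, b = 2.0 / h**2, -1.0 / h**2
    c = [0.0] * (n + 1); d = [0.0] * (n + 1)
    for i in range(1, n):
        m = a - b * c[i - 1]
        c[i] = b / m
        d[i] = (f[i] - b * d[i - 1]) / m
    u = [0.0] * (n + 1)
    for i in range(n - 1, 0, -1):
        u[i] = d[i] - c[i] * u[i + 1]
    return u

def two_grid(u, f, h):
    u = jacobi(u, f, h, 3)                    # pre-smooth
    r = residual(u, f, h)
    n = len(u) - 1
    rc = [0.0] * (n // 2 + 1)                 # restrict (full weighting)
    for i in range(1, n // 2):
        rc[i] = 0.25 * r[2*i - 1] + 0.5 * r[2*i] + 0.25 * r[2*i + 1]
    ec = tridiag_solve(rc, 2.0 * h)           # coarse-grid correction
    e = [0.0] * (n + 1)                       # prolong (linear interpolation)
    for i in range(n // 2 + 1):
        e[2*i] = ec[i]
    for i in range(1, n, 2):
        e[i] = 0.5 * (e[i - 1] + e[i + 1])
    u = [ui + ei for ui, ei in zip(u, e)]
    return jacobi(u, f, h, 3)                 # post-smooth

n, h = 16, 1.0 / 16
f = [math.pi**2 * math.sin(math.pi * i * h) for i in range(n + 1)]
u_star = tridiag_solve(f, h)                  # discrete solution, for reference
u = [0.0] * (n + 1)
for _ in range(12):
    u = two_grid(u, f, h)
err = max(abs(a - b) for a, b in zip(u, u_star))
```

Each cycle reduces the algebraic error by a roughly mesh-independent factor, which is why multigrid scales to very large problems where a smoother alone stalls.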


Containment of uranium in the proposed Egyptian geologic repository for radioactive waste using hydroxyapatite

Hasan, Ahmed H.; Larese, Kathleen C.; Headley, Thomas J.; Zhao, Hongting Z.; Salas, Fred S.

Currently, the Egyptian Atomic Energy Authority is designing a shallow-land disposal facility for low-level radioactive waste. To ensure containment and prevent migration of radionuclides from the site, the use of a reactive backfill material is being considered. One material under consideration is hydroxyapatite, Ca₁₀(PO₄)₆(OH)₂, which has a high affinity for the sorption of many radionuclides. Hydroxyapatite has many properties that make it an ideal backfill material, including low water solubility (Kₛₚ > 10⁻⁴⁰), high stability under reducing and oxidizing conditions over a wide temperature range, availability, and low cost. However, there is often considerable variation in the properties of apatites depending on source and method of preparation. In this work, we characterized and compared a synthetic hydroxyapatite with hydroxyapatites prepared from cattle bone calcined at 500 °C, 700 °C, 900 °C, and 1100 °C. The analysis indicated the synthetic hydroxyapatite was similar in morphology to the cattle hydroxyapatite prepared at 500 °C. With increasing calcination temperature, the crystallinity and crystal size of the hydroxyapatites increased while the BET surface area and carbonate concentration decreased. Batch sorption experiments were performed to determine the effectiveness of each material at sorbing uranium. Sorption of U was strong regardless of apatite type, indicating that all of the apatite materials evaluated are effective sorbents. Sixty-day desorption experiments indicated that desorption of uranium from each hydroxyapatite was negligible.
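Batch sorption results like these are conventionally reduced to a distribution coefficient, Kd = ((C0 − Ce)/Ce)·(V/m), where C0 and Ce are the initial and equilibrium solution concentrations, V the solution volume, and m the sorbent mass. The concentrations and solid/liquid ratio below are hypothetical, chosen only to show the arithmetic, not taken from this study:

```python
# Hypothetical numbers, illustrating how batch-sorption data are reduced.

def distribution_coefficient(c0, ce, volume_mL, mass_g):
    """Kd in mL/g from initial and equilibrium concentrations (same units)."""
    return (c0 - ce) / ce * volume_mL / mass_g

def percent_sorbed(c0, ce):
    return 100.0 * (c0 - ce) / c0

c0, ce = 100.0, 0.5    # ppm U before/after contact (made-up values)
kd = distribution_coefficient(c0, ce, volume_mL=25.0, mass_g=0.1)
sorbed = percent_sorbed(c0, ce)
```

A large Kd (here 49,750 mL/g) corresponds to nearly complete removal of uranium from solution, the behavior reported for all of the apatites tested.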


Taking ASCI supercomputing to the end game

DeBenedictis, Erik

The ASCI supercomputing program is broadly defined as running physics simulations on progressively more powerful digital computers. What happens if we extrapolate the computer technology to its end? We have developed a model for key ASCI computations running on a hypothetical computer whose technology is parameterized in ways that account for advancing technology. This model includes technology information such as Moore's Law for transistor scaling and developments in cooling technology. The model also includes limits imposed by laws of physics, such as thermodynamic limits on power dissipation, limits on cooling, and the limitation of signal propagation velocity to the speed of light. We apply this model and show that ASCI computations will advance smoothly for another 10-20 years to an 'end game' defined by thermodynamic limits and the speed of light. Performance levels at the end game will vary greatly by specific problem, but will be in the Exaflops to Zettaflops range for currently anticipated problems. We have also found an architecture that would be within a constant factor of giving optimal performance at the end game. This architecture is an evolutionary derivative of the mesh-connected microprocessor (such as ASCI Red Storm or IBM Blue Gene/L). We provide designs for the necessary enhancements to microprocessor functionality and to the power efficiency of both the processor and the memory system. The technology developed in the foregoing provides a 'perfect' computer model against which we can rate the quality of realizable computer designs, both in this writing and as a way of designing future computers. This report focuses on classical computers based on irreversible digital logic and, more specifically, on algorithms that simulate space; reversible logic, analog computers, and other ways to address stockpile stewardship are outside the scope of this report.
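One of the physics limits in such a model is easy to work out by hand: a signal cannot cross the machine faster than light, so the machine's physical diameter puts a floor under the latency of any global communication. The machine size and clock rate below are illustrative round numbers, not figures from the report:

```python
# Back-of-envelope version of the speed-of-light limit in the model:
# machine diameter bounds global communication latency.

C = 299_792_458.0            # speed of light in vacuum, m/s

def light_latency_s(distance_m):
    """Minimum one-way signal latency across `distance_m` meters."""
    return distance_m / C

def cycles_per_traversal(distance_m, clock_hz):
    """Clock cycles that elapse while a signal crosses the machine."""
    return light_latency_s(distance_m) * clock_hz

# A 30 m machine at a 3 GHz clock: ~300 cycles per one-way traversal,
# so a global synchronization costs hundreds of cycles regardless of
# how fast the individual processors become.
cycles = cycles_per_traversal(30.0, 3.0e9)
```

This is why the end-game architecture favors mesh-connected designs with mostly nearest-neighbor communication: local hops stay cheap while global operations pay the speed-of-light toll.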


Molecular simulations of MEMS and membrane coatings (PECASE)

Thompson, Aidan P.

The goal of this Laboratory Directed Research & Development (LDRD) effort was to design, synthesize, and evaluate organic-inorganic nanocomposite membranes for solubility-based separations, such as the removal of higher hydrocarbons from air streams, using experiment and theory. We synthesized membranes by depositing alkylchlorosilanes on the nanoporous surfaces of alumina substrates, using techniques from the self-assembled monolayer literature to control the microstructure. We measured the permeability of these membranes to different gas species, in order to evaluate their performance in solubility-based separations. Membrane design goals were met by manipulating the pore size, alkyl group size, and alkyl surface density. We employed molecular dynamics simulation to gain further understanding of the relationship between membrane microstructure and separation performance.


Verification of Euler/Navier-Stokes codes using the method of manufactured solutions

International Journal for Numerical Methods in Fluids

Roy, C.J.; Nelson, C.C.; Smith, T.M.; Ober, Curtis C.

The method of manufactured solutions is used to verify the order of accuracy of two finite-volume Euler and Navier-Stokes codes. The Premo code employs a node-centred approach using unstructured meshes, while the Wind code employs a similar scheme on structured meshes. Both codes use Roe's upwind method with MUSCL extrapolation for the convective terms and central differences for the diffusion terms, thus yielding a numerical scheme that is formally second-order accurate. The method of manufactured solutions is employed to generate exact solutions to the governing Euler and Navier-Stokes equations in two dimensions along with additional source terms. These exact solutions are then used to accurately evaluate the discretization error in the numerical solutions. Through global discretization error analyses, the spatial order of accuracy is observed to be second order for both codes, thus giving a high degree of confidence that the two codes are free from coding mistakes in the options exercised. Examples of coding mistakes discovered using the method are also given. © 2004 John Wiley and Sons, Ltd.
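The method of manufactured solutions can be demonstrated in miniature (1D Poisson rather than the paper's 2D Euler/Navier-Stokes setting): choose u(x) = sin(πx), derive the source term f = π²sin(πx) analytically, solve with a formally second-order central-difference scheme on two grids, and confirm that the observed order of accuracy is ~2:

```python
import math

def solve_poisson_error(n):
    """Solve -u'' = f on (0,1), u(0)=u(1)=0, with f manufactured from
    u(x) = sin(pi x); return the max-norm discretization error."""
    h = 1.0 / n
    f = [math.pi**2 * math.sin(math.pi * i * h) for i in range(n + 1)]
    # Thomas algorithm for (2u_i - u_{i-1} - u_{i+1})/h^2 = f_i
    a, b = 2.0 / h**2, -1.0 / h**2
    c = [0.0] * (n + 1); d = [0.0] * (n + 1)
    for i in range(1, n):
        m = a - b * c[i - 1]
        c[i] = b / m
        d[i] = (f[i] - b * d[i - 1]) / m
    u = [0.0] * (n + 1)
    for i in range(n - 1, 0, -1):
        u[i] = d[i] - c[i] * u[i + 1]
    # Compare against the exact manufactured solution.
    return max(abs(u[i] - math.sin(math.pi * i * h)) for i in range(n + 1))

e_coarse = solve_poisson_error(16)
e_fine = solve_poisson_error(32)
order = math.log(e_coarse / e_fine, 2)   # observed order of accuracy
```

If the observed order matches the formal order (here, 2), the discretization and its implementation are consistent; a coding mistake typically shows up as a degraded observed order, which is exactly how the codes in this paper were checked.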


LDRD report : parallel repartitioning for optimal solver performance

Devine, Karen D.; Boman, Erik G.; Heaphy, Robert T.; Hendrickson, Bruce A.; Heroux, Michael A.

We have developed infrastructure, utilities, and partitioning methods to improve data partitioning in linear solvers and preconditioners. Our efforts included incorporation of data repartitioning capabilities from the Zoltan toolkit into the Trilinos solver framework (allowing dynamic repartitioning of Trilinos matrices); implementation of efficient distributed data directories and unstructured communication utilities in Zoltan and Trilinos; development of a new multi-constraint geometric partitioning algorithm (which can generate one decomposition that is good with respect to multiple criteria); and research into hypergraph partitioning algorithms (which provide up to a 56% reduction in communication volume compared to graph partitioning for a number of emerging applications). This report includes descriptions of the infrastructure and algorithms developed, along with results demonstrating the effectiveness of our approaches.
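The quantity these partitioners drive down is the communication induced by edges that cross part boundaries. A toy version of that metric (this is only the edge-cut count on a made-up 2x4 grid graph; Zoltan's hypergraph model measures true communication volume, which this approximates):

```python
# Edge cut of a partition: edges whose endpoints land in different parts.
# A proxy for the communication a solver pays at every iteration.

def edge_cut(edges, part):
    return sum(1 for u, v in edges if part[u] != part[v])

# A 2x4 grid graph: vertices 0..7, row-major (0-3 top row, 4-7 bottom row).
edges = [(0, 1), (1, 2), (2, 3), (4, 5), (5, 6), (6, 7),
         (0, 4), (1, 5), (2, 6), (3, 7)]

naive = {v: v % 2 for v in range(8)}                     # alternate vertices
aligned = {v: 0 if v % 4 < 2 else 1 for v in range(8)}   # split by columns
```

Both partitions are perfectly balanced (four vertices each), but the naive one cuts 6 edges while the geometry-aware one cuts only 2, which is the kind of gap the report's repartitioning methods exploit at scale.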


A filter-based evolutionary algorithm for constrained optimization

Proposed for publication in Evolutionary Computation.

Hart, William E.

We introduce a filter-based evolutionary algorithm (FEA) for constrained optimization. The filter used by an FEA explicitly imposes the concept of dominance on a partially ordered solution set. We show that the algorithm is provably robust for both linear and nonlinear problems and constraints. FEAs use a finite pattern of mutation offsets, and our analysis is closely related to recent convergence results for pattern search methods. We discuss how properties of this pattern impact the ability of an FEA to converge to a constrained local optimum.
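The filter concept, as we read it from the abstract (this sketch is our illustration, not the paper's exact algorithm), maintains a set of (objective value, constraint violation) pairs in which no entry dominates another; a trial point is accepted only if no stored point is at least as good in both coordinates:

```python
# A filter over (objective, constraint-violation) pairs: smaller is
# better in both coordinates. Hypothetical values throughout.

def dominates(p, q):
    """p dominates q if p is <= q in both coordinates and p != q."""
    return p[0] <= q[0] and p[1] <= q[1] and p != q

def filter_accept(filt, point):
    """Return the updated filter if `point` is acceptable, else None."""
    if any(dominates(kept, point) or kept == point for kept in filt):
        return None                       # rejected: dominated or duplicate
    # Accepted: drop any stored points the new one dominates.
    return [kept for kept in filt if not dominates(point, kept)] + [point]

filt = [(5.0, 0.0), (2.0, 3.0)]           # (objective, constraint violation)
filt = filter_accept(filt, (4.0, 1.0))    # accepted: trades f for violation
rejected = filter_accept(filt, (6.0, 2.0))  # dominated by (5.0, 0.0) -> None
```

The filter thus plays the role a penalty function would otherwise play, imposing a partial order on the population without collapsing objective and feasibility into one number.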


Effect of End-Tethered Polymers on Surface Adhesion of Glassy Polymers

Journal of Polymer Science, Part B: Polymer Physics

Sides, Scott W.; Grest, Gary S.; Stevens, Mark J.; Plimpton, Steven J.

The adhesion between a glassy polymer melt and a substrate is studied in the presence of end-grafted chains chemically attached to the substrate surface. Extensive molecular dynamics simulations have been carried out to study the effect of the areal density Σ of tethered chains and tensile pull velocity v on the adhesive failure mechanisms. The initial configurations are generated using a double-bridging algorithm in which new bonds are formed across a pair of monomers equidistant from their respective free ends. This generates new chain configurations that are substantially different from the original two chains, such that the systems can be equilibrated in a reasonable amount of CPU time. At the slowest tensile pull velocity studied, a crossover from chain scission to crazing is observed as the coverage increases, while for very large pull velocity, only chain scission is observed. As the coverage increases, the sections of the tethered chains pulled out from the interface form the fibrils of a craze that are strong enough to suppress chain scission, resulting in cohesive rather than adhesive failure. © 2003 Wiley Periodicals, Inc.


Verification, validation, and predictive capability in computational engineering and physics

Applied Mechanics Reviews

Oberkampf, William L.; Trucano, Timothy G.; Hirsch, Charles

The state of the art in verification and validation (V&V) in computational physics is surveyed. The views presented are set in a framework in which predictive capability relies on V&V, as well as on other factors that affect predictive capability. Research topics addressed include the development of improved procedures for using the phenomena identification and ranking table (PIRT) to prioritize V&V activities, and the method of manufactured solutions for code verification. Also addressed are the development and use of hierarchical validation diagrams, and the construction and use of validation metrics incorporating statistical measures.


Compact optimization can outperform separation: A case study in structural proteomics

4OR

Carr, Robert D.; Lancia, Giuseppe G.

In Combinatorial Optimization, one is frequently faced with linear programming (LP) problems with exponentially many constraints, which can be solved either using separation or what we call compact optimization. The former technique relies on a separation algorithm, which, given a fractional solution, tries to produce a violated valid inequality. Compact optimization relies on describing the feasible region of the LP by a polynomial number of constraints, in a higher-dimensional space. A commonly held belief is that compact optimization does not perform as well as separation in practice. In this paper, we report on an application in which compact optimization does in fact largely outperform separation. The problem arises in structural proteomics, and concerns the comparison of 3-dimensional protein folds. Our computational results show that compact optimization achieves an improvement of up to two orders of magnitude over separation. We discuss some reasons why compact optimization works in this case but not, e.g., for the LP relaxation of the TSP. © Springer-Verlag 2004.


The signature molecular descriptor: 3. Inverse-quantitative structure-activity relationship of ICAM-1 inhibitory peptides

Journal of Molecular Graphics and Modelling

Churchwell, Carla J.; Rintoul, Mark D.; Martin, Shawn; Visco, Donald P.; Kotu, Archana; Larson, Richard S.; Sillerud, Laurel O.; Brown, David C.; Faulon, Jean L.

We present a methodology for solving the inverse-quantitative structure-activity relationship (QSAR) problem using the molecular descriptor called signature. This methodology is detailed in four parts. First, we create a QSAR equation that correlates the occurrence of a signature to the activity values using a stepwise multilinear regression technique. Second, we construct constraint equations, specifically the graphicality and consistency equations, which facilitate the reconstruction of the solution compounds directly from the signatures. Third, we solve the set of constraint equations, which are both linear and Diophantine in nature. Last, we reconstruct and enumerate the solution molecules and calculate their activity values from the QSAR equation. We apply this inverse-QSAR method to a small set of LFA-1/ICAM-1 peptide inhibitors to assist in the search and design of more-potent inhibitory compounds. Many novel inhibitors were predicted, a number of them more potent than the strongest inhibitor in the training set. Two of the more potent inhibitors were synthesized and tested in vivo, confirming them to be the strongest inhibiting peptides to date. Some of these compounds can be recycled to train a new QSAR and develop a more focused library of lead compounds. © 2003 Elsevier Inc. All rights reserved.
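The forward half of the first step, predicting activity from a fitted QSAR equation, is just a linear combination of signature occurrence counts. A schematic sketch (the signature names, coefficients, and counts below are entirely made up; the paper's model is fitted by stepwise multilinear regression on real data):

```python
# Schematic linear QSAR: activity = intercept + sum_i c_i * n_i,
# where n_i is the occurrence count of signature i in the molecule.
# All names and numbers here are hypothetical placeholders.

def predict_activity(coeffs, counts, intercept=0.0):
    return intercept + sum(coeffs[sig] * n for sig, n in counts.items())

coeffs = {"sig_A": 0.8, "sig_B": -0.3, "sig_C": 1.1}   # fitted coefficients
compound = {"sig_A": 2, "sig_B": 1, "sig_C": 0}        # signature counts
activity = predict_activity(coeffs, compound, intercept=0.5)
```

The inverse problem runs this map backwards: find integer count vectors satisfying the graphicality and consistency constraints that maximize the predicted activity, then rebuild molecules from those counts.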


Simulating economic effects of disruptions in the telecommunications infrastructure

Barton, Dianne C.; Eidson, Eric D.; Schoenwald, David A.; Cox, Roger G.; Reinert, Rhonda K.

CommAspen is a new agent-based model for simulating the interdependent effects of market decisions and disruptions in the telecommunications infrastructure on other critical infrastructures in the U.S. economy, such as banking and finance, and electric power. CommAspen extends and modifies the capabilities of Aspen-EE, an agent-based model previously developed by Sandia National Laboratories to analyze the interdependencies between the electric power system and other critical infrastructures. CommAspen has been tested on a series of scenarios in which the communications network has been disrupted due to congestion and outages. Analysis of the scenario results indicates that communications networks simulated by the model behave as their counterparts do in the real world. Results also show that the model could be used to analyze the economic impact of communications congestion and outages.


Trilinos 3.1 tutorial

Heroux, Michael A.; Sala, Marzio S.

This document introduces the use of Trilinos, version 3.1. Trilinos has been written to support, in a rigorous manner, the solver needs of the engineering and scientific applications at Sandia National Laboratories. The aim of this manuscript is to present the basic features of some of the Trilinos packages. The presented material includes the definition of distributed matrices and vectors with Epetra, the iterative solution of linear systems with AztecOO, incomplete factorizations with IFPACK, multilevel methods with ML, the direct solution of linear systems with Amesos, and the iterative solution of nonlinear systems with NOX. With the help of several examples, some of the most important classes and methods are detailed for the inexperienced user. Each example is extensively commented throughout the text, and further comments can be found in the source of each example. This document is a companion to the Trilinos User's Guide and Trilinos Development Guides. The documentation included in each of the Trilinos packages is also of fundamental importance.


Application of multidisciplinary analysis to gene expression

Davidson, George S.; Haaland, David M.

Molecular analysis of cancer, at the genomic level, could lead to individualized patient diagnostics and treatments. The developments to follow will signal a significant paradigm shift in the clinical management of human cancer. Despite our initial hopes, however, it seems that simple analysis of microarray data cannot elucidate clinically significant gene functions and mechanisms. Extracting biological information from microarray data requires a complicated path involving multidisciplinary teams of biomedical researchers, computer scientists, mathematicians, statisticians, and computational linguists. The integration of the diverse outputs of each team is the limiting factor in the progress to discover candidate genes and pathways associated with the molecular biology of cancer. Specifically, one must deal with sets of significant genes identified by each method and extract whatever useful information may be found by comparing these different gene lists. Here we present our experience with such comparisons, and share methods developed in the analysis of an infant leukemia cohort studied on Affymetrix HG-U95A arrays. In particular, spatial gene clustering, hyper-dimensional projections, and computational linguistics were used to compare different gene lists. In spatial gene clustering, different gene lists are grouped together and visualized on a three-dimensional expression map, where genes with similar expressions are co-located. In another approach, projections from gene expression space onto a sphere clarify how groups of genes can jointly have more predictive power than groups of individually selected genes. Finally, online literature is automatically rearranged to present information about genes common to multiple groups, or to contrast the differences between the lists. The combination of these methods has improved our understanding of infant leukemia. While the complicated reality of the biology dashed our initial, optimistic hopes for simple answers from microarrays, we have made progress by combining very different analytic approaches.
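The gene-list comparison problem at the core of this work reduces, in its simplest form, to set operations over the lists each method produces. A minimal sketch (the method names mirror the abstract, but the gene symbols are placeholders, not findings from the study):

```python
# Compare significant-gene lists from several analysis methods:
# what do all methods agree on, and what does each uniquely contribute?

def compare_gene_lists(lists):
    sets = {name: set(genes) for name, genes in lists.items()}
    consensus = set.intersection(*sets.values())
    unique = {name: s - set().union(*(t for n, t in sets.items() if n != name))
              for name, s in sets.items()}
    return consensus, unique

lists = {
    "clustering":  ["FLT3", "HOXA9", "MEIS1"],   # placeholder gene symbols
    "projection":  ["FLT3", "MEIS1", "CD44"],
    "linguistics": ["FLT3", "HOXA9", "KIT"],
}
consensus, unique = compare_gene_lists(lists)
```

Genes in the consensus are the most robust candidates, while each method's unique genes flag hypotheses the other approaches would have missed, which is the complementarity the abstract argues for.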


Color Snakes for Dynamic Lighting Conditions on Mobile Manipulation Platforms

IEEE International Conference on Intelligent Robots and Systems

Schaub, Hanspeter; Smith, Christopher E.

Statistical active contour models (aka statistical pressure snakes) have attractive properties for use in mobile manipulation platforms as both a method for use in visual servoing and as a natural component of a human-computer interface. Unfortunately, the constantly changing illumination expected in outdoor environments presents problems for statistical pressure snakes and for their image gradient-based predecessors. This paper introduces a new color-based variant of statistical pressure snakes that gives superior performance under dynamic lighting conditions and improves upon the previously published results of attempts to incorporate color imagery into active deformable models.


Equilibration of long chain polymer melts in computer simulations

Journal of Chemical Physics

Auhl, Rolf; Everaers, Ralf; Grest, Gary S.; Kremer, Kurt; Plimpton, Steven J.

Equilibrated melts of long-chain polymers were prepared. The combination of molecular dynamics (MD) relaxation, double-bridging, and slow push-off allowed the efficient and controlled preparation of equilibrated melts of short, medium, and long chains, respectively. Results were obtained for an off-lattice bead-spring model with chain lengths up to N = 7000 beads.


Applications of algebraic multigrid to large-scale finite element analysis of whole bone micro-mechanics on the IBM SP

Proceedings of the 2003 ACM/IEEE Conference on Supercomputing, SC 2003

Adams, Mark F.; Bayraktar, Harun H.; Keaveny, Tony M.; Papadopoulos, Panayiotis

Accurate micro-finite element analyses of whole bones require the solution of large sets of algebraic equations. Multigrid has proven to be an effective approach to the design of highly scalable linear solvers for solid mechanics problems. We present some of the first applications of scalable linear solvers, on massively parallel computers, to whole vertebral body structural analysis. We analyze the performance of our algebraic multigrid (AMG) methods on problems with over 237 million degrees of freedom on IBM SP parallel computers. We demonstrate excellent parallel scalability, both in the algorithms and the implementations, and analyze the nodal performance of the important AMG kernels on the IBM Power3 and Power4 architectures. © 2003 ACM.


Final report for the endowment of simulator agents with human-like episodic memory LDRD

Forsythe, James C.; Speed, Ann S.; Lippitt, Carl E.; Schaller, Mark J.; Xavier, Patrick G.; Thomas, Edward V.; Schoenwald, David A.

This report documents work undertaken to endow the cognitive framework currently under development at Sandia National Laboratories with a human-like memory for specific life episodes. Capabilities have been demonstrated within the context of three separate problem areas. The first year of the project developed a capability whereby simulated robots were able to utilize a record of shared experience to perform surveillance of a building to detect a source of smoke. The second year focused on simulations of social interactions providing a queriable record of interactions such that a time series of events could be constructed and reconstructed. The third year addressed tools to promote desktop productivity, creating a capability to query episodic logs in real time allowing the model of a user to build on itself based on observations of the user's behavior.


Epetra developers coding guidelines

Heroux, Michael A.

Epetra is a package of classes for the construction and use of serial and distributed parallel linear algebra objects. It is one of the base packages in Trilinos. This document describes guidelines for Epetra coding style. The issues discussed here go beyond correct C++ syntax to address issues that make code more readable and self-consistent. The guidelines presented here are intended to aid current and future development of Epetra specifically. They reflect design decisions that were made in the early development stages of Epetra. Some of the guidelines are contrary to more commonly used conventions, but we choose to continue these practices for the purposes of self-consistency. These guidelines are intended to be complementary to policies established in the Trilinos Developers Guide.
