Publications

Results 9576–9600 of 9,998

Validation and assessment of integer programming sensor placement models

Berry, Jonathan W.; Hart, William E.; Phillips, Cynthia A.; Watson, Jean-Paul W.

We consider the accuracy of predictions made by integer programming (IP) models of sensor placement for water security applications. We have recently shown that IP models can be used to find optimal sensor placements for a variety of performance criteria (e.g., minimizing health impacts or time to detection). However, these models make a variety of simplifying assumptions that might bias the final solution. We show that our IP modeling assumptions are similar to those underlying models developed for other sensor placement methodologies, and thus IP models should give similar predictions. However, this discussion highlights that there are significant differences in how temporal effects are modeled for sensor placement. We describe how these modeling assumptions can impact sensor placements.


Water quality sensor placement in water networks with budget constraints

Berry, Jonathan W.; Hart, William E.; Phillips, Cynthia A.

In recent years, several integer programming models have been proposed to place sensors in municipal water networks in order to detect intentional or accidental contamination. Although these initial models assumed that it is equally costly to place a sensor anywhere in the network, there are clearly practical cost constraints that would impact a sensor placement decision. Such constraints include not only labor costs but also the general accessibility of a sensor placement location. In this paper, we extend our integer program to explicitly model the cost of sensor placement. We partition network locations into groups of varying placement cost, and we consider the public health impacts of contamination events under varying budget constraints. Our models thus permit cost/benefit analyses of differing sensor placement designs. As a control for our optimization experiments, we compare the set of sensor locations selected by the optimization models to a set of manually selected sensor locations.
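For orientation, a representative p-median-style formulation in the spirit of these models (reconstructed from the general sensor placement literature, not copied from this paper; the symbols are our own):

```latex
\begin{aligned}
\min \;& \sum_{a \in A} \alpha_a \sum_{i \in L_a} d_{ai}\, x_{ai}
  && \text{(expected impact over contamination events)}\\
\text{s.t.}\;& \sum_{i \in L_a} x_{ai} = 1 \quad \forall a \in A
  && \text{(each event is attributed to a first-detection location)}\\
& x_{ai} \le s_i \quad \forall a \in A,\ i \in L_a
  && \text{(detection only where a sensor is placed)}\\
& \sum_{i} c_i\, s_i \le B, \qquad s_i,\ x_{ai} \in \{0,1\}
  && \text{(placement budget)}
\end{aligned}
```

Here \(\alpha_a\) is the likelihood of event \(a\), \(d_{ai}\) the impact if event \(a\) is first detected at location \(i\), and \(s_i = 1\) if a sensor is placed at \(i\). The budget constraint with location-dependent costs \(c_i\) replaces the simpler cardinality constraint \(\sum_i s_i \le p\) of the earlier equal-cost models.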


A scalable distributed parallel breadth-first search algorithm on BlueGene/L

Proceedings of the ACM/IEEE 2005 Supercomputing Conference, SC'05

Yoo, Andy; Chow, Edmond; Henderson, Keith; McLendon, William; Hendrickson, Bruce A.; Çatalyürek, Ümit

Many emerging large-scale data science applications require searching large graphs distributed across multiple memories and processors. This paper presents a distributed breadth-first search (BFS) scheme that scales for random graphs with up to three billion vertices and 30 billion edges. Scalability was tested on IBM BlueGene/L with 32,768 nodes at the Lawrence Livermore National Laboratory. Scalability was obtained through a series of optimizations, in particular, those that ensure scalable use of memory. We use 2D (edge) partitioning of the graph instead of conventional 1D (vertex) partitioning to reduce communication overhead. For Poisson random graphs, we show that the expected size of the messages is scalable for both 2D and 1D partitionings. Finally, we have developed efficient collective communication functions for the 3D torus architecture of BlueGene/L that also take advantage of the structure in the problem. The performance and characteristics of the algorithm are measured and reported. © 2005 IEEE.
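The distributed scheme itself is too involved for a short excerpt, but the level-synchronous structure it parallelizes is compact. A minimal serial sketch (our own illustration, not code from the paper):

```cpp
#include <cstdio>
#include <vector>

// Level-synchronous BFS over an adjacency list: sweep the current
// frontier, discover unvisited neighbors into the next frontier.
// The paper distributes exactly this level-by-level structure; its 2D
// edge partitioning confines the per-level frontier exchange to
// processor rows and columns instead of all processors.
std::vector<int> bfs_levels(const std::vector<std::vector<int>>& adj, int src) {
    std::vector<int> level(adj.size(), -1);
    std::vector<int> frontier{src};
    level[src] = 0;
    for (int depth = 1; !frontier.empty(); ++depth) {
        std::vector<int> next;
        for (int u : frontier)
            for (int v : adj[u])
                if (level[v] == -1) {   // first discovery of v
                    level[v] = depth;
                    next.push_back(v);
                }
        frontier.swap(next);            // advance one level
    }
    return level;
}

int main() {
    // Tiny example graph: edges 0-1, 0-2, 1-3, 2-3, 3-4.
    std::vector<std::vector<int>> adj = {{1, 2}, {0, 3}, {0, 3}, {1, 2, 4}, {3}};
    for (int lv : bfs_levels(adj, 0)) std::printf("%d ", lv);
    std::printf("\n");  // prints: 0 1 1 2 3
}
```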


Supervised machine learning for modeling human recognition of vehicle-driving situations

2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS

Dixon, Kevin R.; Lippitt, Carl E.; Forsythe, James C.

A classification system is developed to identify driving situations from labeled examples of previous occurrences. The purpose of the classifier is to provide physical context to a separate system that mitigates unnecessary distractions, allowing the driver to maintain focus during periods of high difficulty. While watching videos of driving, different users were asked to indicate their perceptions of the current situation. We then trained a classifier to emulate the human recognition of driving situations using the Sandia Cognitive Framework. In unstructured conditions, such as driving in urban areas and on the German autobahn, the classifier was able to correctly predict human perceptions of driving situations over 95% of the time. This paper focuses on the learning algorithms used to train the driving-situation classifier. Future work will reduce the human effort needed to train the system. © 2005 IEEE.


Model-building codes for membrane proteins

Brown, William M.; Faulon, Jean-Loup M.; Gray, Genetha A.; Hunt, Thomas W.; Schoeniger, Joseph S.; Slepoy, Alexander S.; Young, Malin M.

We have developed a novel approach to modeling the transmembrane spanning helical bundles of integral membrane proteins using only a sparse set of distance constraints, such as those derived from MS3-D, dipolar-EPR and FRET experiments. Algorithms have been written for searching the conformational space of membrane protein folds matching the set of distance constraints, which provides initial structures for local conformational searches. Local conformation search is achieved by optimizing these candidates against a custom penalty function that incorporates both measures derived from statistical analysis of solved membrane protein structures and distance constraints obtained from experiments. This results in refined helical bundles to which the interhelical loops and amino acid side-chains are added. Using a set of only 27 distance constraints extracted from the literature, our methods successfully recover the structure of dark-adapted rhodopsin to within 3.2 Å of the crystal structure.
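The abstract does not give the penalty function; a generic distance-restraint term of the kind such local searches typically minimize (purely illustrative; the weights and the statistical term are placeholders) is:

```latex
E(\mathbf{x}) \;=\; \sum_{(i,j)\in C} w_{ij}\left(\lVert \mathbf{x}_i - \mathbf{x}_j \rVert - \hat d_{ij}\right)^{2} \;+\; E_{\mathrm{stat}}(\mathbf{x}),
```

where \(C\) is the set of experimentally constrained residue pairs, \(\hat d_{ij}\) the measured distance, and \(E_{\mathrm{stat}}\) stands for the terms derived from statistics of solved membrane protein structures.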


Robust algebraic preconditioners using IFPACK 3.0

Sala, Marzio S.; Heroux, Michael A.

IFPACK provides a suite of object-oriented algebraic preconditioners for use in preconditioned iterative solvers. IFPACK constructors expect the (distributed) real sparse matrix to be an Epetra RowMatrix object. IFPACK can be used to define point and block relaxation preconditioners, various flavors of incomplete factorizations for symmetric and non-symmetric matrices, and one-level additive Schwarz preconditioners with variable overlap. Exact LU factorizations of the local submatrix can be accessed through the AMESOS package. IFPACK, as part of the Trilinos Solver Project, interacts well with other Trilinos packages. In particular, IFPACK objects can be used as preconditioners for AZTECOO and as smoothers for ML. IFPACK is mainly written in C++, but only a limited subset of C++ features is used in order to enhance portability.
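A minimal sketch of the factory-based usage pattern this describes, assuming an already-assembled Epetra_RowMatrix and Epetra_LinearProblem; the parameter names follow the Ifpack conventions of the period and should be checked against the IFPACK 3.0 manual:

```cpp
#include "Ifpack.h"
#include "AztecOO.h"
#include "Teuchos_ParameterList.hpp"

// Build an incomplete-factorization preconditioner through the Ifpack
// factory and hand it to AztecOO, as the abstract describes.
void solve_with_ifpack(Epetra_RowMatrix& A, Epetra_LinearProblem& problem) {
  Ifpack factory;
  int overlap = 1;                                  // additive Schwarz overlap
  Ifpack_Preconditioner* prec = factory.Create("ILU", &A, overlap);

  Teuchos::ParameterList params;
  params.set("fact: level-of-fill", 1);             // ILU(1)
  prec->SetParameters(params);
  prec->Initialize();                               // symbolic setup
  prec->Compute();                                  // numeric factorization

  AztecOO solver(problem);
  solver.SetPrecOperator(prec);                     // IFPACK object as AztecOO preconditioner
  solver.Iterate(1000, 1e-8);
  delete prec;
}
```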


ALEGRA: version 4.6

Wong, Michael K.; Brunner, Thomas A.; Garasi, Christopher J.; Haill, Thomas A.; Mehlhorn, Thomas A.; Drake, Richard R.; Hensinger, David M.; Robbins, Joshua R.; Robinson, Allen C.; Summers, Randall M.; Voth, Thomas E.

ALEGRA is an arbitrary Lagrangian-Eulerian multi-material finite element code used for modeling solid dynamics problems involving large distortion and shock propagation. This document describes the basic user input language and instructions for using the software.


Solving elliptic finite element systems in near-linear time with support preconditioners

Proposed for publication in the SIAM Journal on Matrix Analysis and Applications.

Boman, Erik G.; Hendrickson, Bruce A.

We consider linear systems arising from the use of the finite element method for solving a certain class of linear elliptic problems. Our main result is that these linear systems, which are symmetric and positive semidefinite, are well approximated by symmetric diagonally dominant matrices. Our framework for defining matrix approximation is support theory. Significant graph theoretic work has already been developed in the support framework for preconditioners in the diagonally dominant case, and in particular it is known that such systems can be solved with iterative methods in nearly linear time. Thus, our approximation result implies that these graph theoretic techniques can also solve a class of finite element problems in nearly linear time. We show that the quality of our approximation, which controls the number of iterations in the preconditioned iterative solver, depends primarily on a mesh quality measure but not on the problem size or shape of the domain.
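For context, the key quantity in support theory is the support number, and the standard bound (general support theory, not this paper's specific theorems) explains why such approximations are useful. For symmetric positive semidefinite A and B with a common null space,

```latex
\sigma(A,B) = \min\{\tau \in \mathbb{R} : \tau B - A \succeq 0\},
\qquad
\kappa\!\left(B^{+}A\right) \;\le\; \sigma(A,B)\,\sigma(B,A),
```

so a diagonally dominant B with small support numbers in both directions yields a preconditioner for the finite element matrix A with a controlled iteration count.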


Sensitivity technologies for large scale simulation

Bartlett, Roscoe B.; Collis, Samuel S.; Keiter, Eric R.; Ober, Curtis C.

Sensitivity analysis is critically important to numerous analysis algorithms, including large scale optimization, uncertainty quantification, reduced-order modeling, and error estimation. Our research focused on developing tools, algorithms, and standard interfaces to facilitate the implementation of sensitivity-type analysis in existing codes and, equally important, on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms, and two-level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint-based a posteriori error estimation.

A mathematical overview is provided of direct sensitivities and adjoint methods for both steady-state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady-state internal flows subject to convection-diffusion; real-time performance is achieved using a novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct the initial conditions of a contamination event in an external flow, and we demonstrate an adjoint-based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations.

Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. The hybrid automatic differentiation method was applied to a first-order approximation of the Euler equations and used as a preconditioner; in comparison to other methods, the AD preconditioner showed better convergence behavior. Our ultimate target is to perform shape optimization and hp adaptivity using adjoint formulations in the Premo compressible fluid flow simulator.

A mathematical formulation for mixed-level simulation algorithms has been developed in which different physics interact at potentially different spatial resolutions in a single domain. To minimize the implementation effort, explicit solution methods can be considered; however, implicit methods are preferred if computational efficiency is a high priority. We present the use of a partial-elimination nonlinear solver technique to solve these mixed-level problems and show how these formulations are closely coupled to intrusive optimization approaches and sensitivity analyses.

Production codes are typically not designed for sensitivity analysis or large scale optimization, and implementing our optimization libraries in multiple production simulation codes, each with its own linear algebra interface, becomes an intractable task. In an attempt to streamline this task, we have developed a standard interface between the numerical algorithm (such as optimization) and the underlying linear algebra. These interfaces (TSFCore and TSFCoreNonlin) have been adopted by the Trilinos framework, and the goal is to promote their use, especially in new developments.

Finally, an adjoint-based a posteriori error estimator has been developed for discontinuous Galerkin discretizations of Poisson's equation. The goal is to investigate other ways to leverage the adjoint calculations, and we show how the convergence of the forward problem can be improved by adapting the grid using adjoint-based error estimates. Error estimation is usually conducted with continuous adjoints, but if discrete adjoints are available it may be possible to reuse the discrete version for error estimation. We investigate the advantages and disadvantages of continuous and discrete adjoints through a simple example.
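For reference, the standard steady-state direct and adjoint sensitivity relations alluded to above (textbook forms, not quoted from the report): given residual equations R(u, p) = 0 and an objective J(u, p),

```latex
\text{direct:}\quad
\frac{\partial R}{\partial u}\frac{du}{dp} = -\frac{\partial R}{\partial p},
\qquad
\frac{dJ}{dp} = \frac{\partial J}{\partial p} + \frac{\partial J}{\partial u}\frac{du}{dp};
\qquad
\text{adjoint:}\quad
\left(\frac{\partial R}{\partial u}\right)^{\!T}\!\lambda = \left(\frac{\partial J}{\partial u}\right)^{\!T},
\qquad
\frac{dJ}{dp} = \frac{\partial J}{\partial p} - \lambda^{T}\frac{\partial R}{\partial p}.
```

The direct approach costs one linear solve per parameter, while the adjoint approach costs one solve per objective, which is why adjoints are preferred for problems such as source inversion and error estimation with many parameters.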


Sandia National Laboratories Advanced Simulation and Computing (ASC) software quality plan. Part 1: ASC software quality engineering practices version 1.0

Boucheron, Edward A.; Schofield, Joseph R.; Drake, Richard R.; Edwards, Harold C.; Minana, Molly A.; Forsythe, Christi A.; Heaphy, Robert T.; Hodges, Ann L.; Pavlakos, Constantine P.; Sturtevant, Judy E.

The purpose of the Sandia National Laboratories (SNL) Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. Quality is defined in DOE/AL Quality Criteria (QC-1) as conformance to customer requirements and expectations. This quality plan defines the ASC program software quality practices and provides mappings of these practices to the SNL Corporate Process Requirements (CPR 1.3.2 and CPR 1.3.6) and the Department of Energy (DOE) document, ASCI Software Quality Engineering: Goals, Principles, and Guidelines (GP&G). This quality plan identifies ASC management and software project teams' responsibilities for cost-effective software engineering quality practices. The SNL ASC Software Quality Plan establishes the signatories' commitment to improving software products by applying cost-effective software engineering quality practices. This document explains the project teams' opportunities for tailoring and implementing the practices; enumerates the practices that compose the development of SNL ASC's software products; and includes a sample assessment checklist that was developed based upon the practices in this document.


Graduated embodiment for sophisticated agent evolution and optimization

Boslough, Mark B.; Peters, Michael D.; Pierson, Arthurine R.

We summarize the results of a project to develop evolutionary computing methods for the design of behaviors of embodied agents in the form of autonomous vehicles. We conceived and implemented a strategy called graduated embodiment. This method allows high-level behavior algorithms to be developed using genetic programming methods in a low-fidelity, disembodied modeling environment for migration to high-fidelity, complex embodied applications. This project applies our methods to the problem domain of robot navigation using adaptive waypoints, which allow navigation behaviors to be ported among autonomous mobile robots with different degrees of embodiment, using incremental adaptation and staged optimization. Our approach to biomimetic behavior engineering is a hybrid of human design and artificial evolution, with the application of evolutionary computing in stages to preserve building blocks and limit search space. The methods and tools developed for this project are directly applicable to other agent-based modeling needs, including climate-related conflict analysis, multiplayer training methods, and market-based hypothesis evaluation.


Considering the relative importance of network performance and network features

Lawry, William L.; Underwood, Keith

Latency and bandwidth are usually considered to be the dominant factors in parallel application performance; however, recent studies have indicated that support for independent progress in MPI can also have a significant impact on application performance. This paper leverages the Cplant system at Sandia National Labs to compare a faster, vendor-provided MPI library without independent progress to an internally developed MPI library that sacrifices some performance to provide independent progress. The results are surprising. Although some applications see significant negative impacts from the reduced network performance, others are more sensitive to the presence of independent progress.
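A minimal illustration of the pattern where independent progress matters (generic MPI, our own example, not Cplant code; run with at least two ranks): rank 1 posts a receive and then computes without re-entering the MPI library, so without an asynchronous progress engine the transfer may not advance until MPI_Wait.

```cpp
#include <mpi.h>
#include <vector>

// Stand-in for application work; deliberately makes no MPI calls, so a
// library without independent progress cannot advance the transfer here.
double long_compute_phase() {
    double s = 0.0;
    for (long i = 1; i <= 200000000L; ++i) s += 1.0 / double(i);
    return s;
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    std::vector<double> buf(1 << 20);   // ~8 MB message

    if (rank == 0) {
        // Large send posted while rank 1 is busy computing.
        MPI_Send(buf.data(), (int)buf.size(), MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Request req;
        MPI_Irecv(buf.data(), (int)buf.size(), MPI_DOUBLE, 0, 0,
                  MPI_COMM_WORLD, &req);
        volatile double sink = long_compute_phase();  // no MPI calls here
        (void)sink;
        // Without independent progress, the bulk of the transfer may only
        // happen inside this call, serializing it after the computation.
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }
    MPI_Finalize();
    return 0;
}
```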


Genomes to life project quarterly report June 2004

Heffelfinger, Grant S.; Martino, Anthony M.; Rintoul, Mark D.

This SAND report provides the technical progress through June 2004 of the Sandia-led project, "Carbon Sequestration in Synechococcus Sp.: From Molecular Machines to Hierarchical Modeling", funded by the DOE Office of Science Genomes to Life Program. Understanding, predicting, and perhaps manipulating carbon fixation in the oceans has long been a major focus of biological oceanography and has more recently been of interest to a broader audience of scientists and policy makers. It is clear that the oceanic sinks and sources of CO₂ are important terms in the global environmental response to anthropogenic atmospheric inputs of CO₂ and that oceanic microorganisms play a key role in this response. However, the relationship between this global phenomenon and the biochemical mechanisms of carbon fixation in these microorganisms is poorly understood. In this project, we will investigate the carbon sequestration behavior of Synechococcus sp., an abundant marine cyanobacterium known to be important in environmental responses to carbon dioxide levels, through experimental and computational methods.

This project is a combined experimental and computational effort with emphasis on developing and applying new computational tools and methods. Our experimental effort will provide the biology and data to drive the computational efforts, and it includes significant investment in developing new experimental methods for uncovering protein partners, characterizing protein complexes, and identifying new binding domains. We will also develop and apply new data measurement and statistical methods for analyzing microarray experiments.

Computational tools will be essential to our efforts to discover and characterize the function of the molecular machines of Synechococcus. To this end, molecular simulation methods will be coupled with knowledge discovery from diverse biological data sets for high-throughput discovery and characterization of protein-protein complexes. In addition, we will develop a set of novel capabilities for inference of regulatory pathways in microbial genomes across multiple sources of information through the integration of computational and experimental technologies. These capabilities will be applied to Synechococcus regulatory pathways to characterize their interaction map and identify component proteins in these pathways. We will also investigate methods for combining experimental and computational results with visualization and natural language tools to accelerate discovery of regulatory pathways.

The ultimate goal of this effort is to develop and apply the new experimental and computational methods needed to generate a new level of understanding of how the Synechococcus genome affects carbon fixation at the global scale. The anticipated experimental and computational methods will provide ever-increasing insight about the individual elements and steps in the carbon fixation process; however, relating an organism's genome to its cellular response in varying environments will require systems biology approaches. Thus a primary goal for this effort is to integrate the genomic data generated from experiments and lower-level simulations with data from the existing body of literature into a whole-cell model. We plan to accomplish this by developing and applying a set of tools for capturing the carbon fixation behavior of Synechococcus at different levels of resolution.

Finally, the explosion of data being produced by high-throughput experiments requires data analysis and models that are more computationally complex, more heterogeneous, and coupled to ever-increasing amounts of experimentally obtained data in varying formats. These challenges are unprecedented in high performance scientific computing and necessitate the development of a companion computational infrastructure to support this effort.


A manufactured solution for verifying CFD boundary conditions: part II

Knupp, Patrick K.; Ober, Curtis C.

Order-of-accuracy verification is necessary to ensure that software correctly solves a given set of equations. One method to verify the order of accuracy of a code is the method of manufactured solutions. In this study, a manufactured solution has been derived and implemented that allows verification of not only the Euler, Navier-Stokes, and Reynolds-Averaged Navier-Stokes (RANS) equation sets, but also some of their associated boundary conditions (BCs): slip, no-slip (adiabatic and isothermal), and outflow (subsonic, supersonic, and mixed). Order-of-accuracy verification has been performed for the Euler and Navier-Stokes equations and these BCs in a compressible computational fluid dynamics code. All of the results shown are on skewed, non-uniform meshes; RANS results will be presented in a future paper. The observed order of accuracy was lower than the expected order of accuracy in two cases. One of these cases resulted in the identification and correction of a coding mistake in the CHAD gradient correction that was reducing the observed order of accuracy; this mistake would have been undetectable on a Cartesian mesh. During the search for the CHAD gradient-correction problem, an unrelated coding mistake was found and corrected. The other case in which the observed order of accuracy was less than expected was a test of the slip BC, although no specific coding or formulation mistakes have yet been identified there. After correction of the identified coding mistakes, all of the aforementioned equation sets and BCs demonstrated the expected (or at least acceptable) order of accuracy except the slip condition.
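In outline, the standard MMS recipe (stated here for context, not quoted from the paper): choose a smooth manufactured solution u_m, substitute it into the governing operator L to obtain a source term, solve the discretized equations with that source, and check the error decay under mesh refinement:

```latex
s = \mathcal{L}(u_m), \qquad \mathcal{L}_h(u_h) = s, \qquad e_h = \lVert u_h - u_m \rVert,
\qquad
p_{\mathrm{obs}} = \frac{\log\!\left(e_h / e_{h/2}\right)}{\log 2} \;\approx\; p_{\mathrm{expected}}.
```

A deficit in the observed order p_obs, as seen in the two cases above, is the signal that a coding or formulation mistake is degrading the discretization.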


Multilevel methods for eigenspace computations in structural dynamics

Lehoucq, Richard B.; Hetmaniuk, Ulrich L.

Modal analysis of three-dimensional structures frequently involves finite element discretizations with millions of unknowns and requires computing hundreds or thousands of eigenpairs. In this presentation we review methods based on domain decomposition for such eigenspace computations in structural dynamics. We distinguish approaches that solve the eigenproblem algebraically (with minimal connections to the underlying partial differential equation) from approaches that tightly couple the eigensolver with the partial differential equation.


A scalable parallel graph coloring algorithm for distributed memory computers

Lecture Notes in Computer Science

Boman, Erik G.; Bozdağ, Doruk; Catalyurek, Umit; Gebremedhin, Assefaw H.; Manne, Fredrik

In large-scale parallel applications a graph coloring is often carried out to schedule computational tasks. In this paper, we describe a new distributed-memory algorithm for doing the coloring itself in parallel. The algorithm operates in an iterative fashion; in each round vertices are speculatively colored based on limited information, and then a set of incorrectly colored vertices, to be recolored in the next round, is identified. Parallel speedup is achieved in part by reducing the frequency of communication among processors. Experimental results on a PC cluster using up to 16 processors show that the algorithm is scalable. © Springer-Verlag Berlin Heidelberg 2005.
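A serial sketch of the speculate-and-resolve idea (our own illustration; the paper's algorithm runs these rounds across distributed memories with communication between them): each round colors the pending vertices against a snapshot of neighbor colors, mimicking stale remote information, then detects same-color edges and queues the higher-numbered endpoint for recoloring.

```cpp
#include <cstdio>
#include <vector>

// Speculate-and-resolve coloring, simulated serially: pending vertices
// pick the smallest color unused by neighbors *as of the previous
// round's snapshot*; vertices that clash with a neighbor colored the
// same round are recolored in the next round. The lowest-numbered
// pending vertex never clashes, so the pending set shrinks each round.
std::vector<int> speculative_coloring(const std::vector<std::vector<int>>& adj) {
    const int n = (int)adj.size();
    std::vector<int> color(n, -1);
    std::vector<int> pending(n);
    for (int v = 0; v < n; ++v) pending[v] = v;

    while (!pending.empty()) {
        std::vector<int> snapshot = color;          // stale view of neighbors
        for (int v : pending) {                     // speculative phase
            std::vector<char> used(n + 1, 0);
            for (int u : adj[v])
                if (snapshot[u] >= 0) used[snapshot[u]] = 1;
            int c = 0;
            while (used[c]) ++c;                    // first-fit color
            color[v] = c;
        }
        std::vector<int> conflicts;                 // detection phase
        for (int v : pending)
            for (int u : adj[v])
                if (u < v && color[u] == color[v]) { // keep lower id's color
                    conflicts.push_back(v);
                    break;
                }
        pending.swap(conflicts);
    }
    return color;
}

int main() {
    // Triangle plus a pendant vertex: 0-1, 1-2, 0-2, 2-3.
    std::vector<std::vector<int>> adj = {{1, 2}, {0, 2}, {0, 1, 3}, {2}};
    for (int c : speculative_coloring(adj)) std::printf("%d ", c);
    std::printf("\n");  // a valid coloring: 0 1 2 0
}
```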
