Publications

Results 9826–9850 of 9,998

Statistical Validation of Engineering and Scientific Models: Validation Experiments to Application

Trucano, Timothy G.

Several major issues associated with model validation are addressed here. First, we extend the application-based model validation metric presented in Hills and Trucano (2001) to the Maximum Likelihood approach introduced in Hills and Trucano (2002). This method allows us to use the target application of the code to weight the measurements made from a validation experiment, so that the measurements most important for the application are weighted most heavily. Second, we further develop the linkage between suites of validation experiments and the target application so that we can (1) provide some measure of coverage of the target application and (2) evaluate the effect of uncertainty in the measurements and model parameters on application-level validation. We provide several examples of this approach based on steady and transient heat conduction, and on shock physics applications.
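The application-weighted treatment of validation measurements can be sketched as follows. The probe values and weights here are hypothetical, and the simple quadratic form stands in for the maximum-likelihood metric of Hills and Trucano (2002); it only illustrates how application-derived weights let the most relevant measurements dominate the metric:

```python
def weighted_validation_metric(measured, predicted, weights):
    """Application-weighted sum-of-squares discrepancy between experiment
    and model.  Measurements that matter most for the target application
    carry the largest weights, so their residuals dominate the metric.
    Illustrative stand-in for the maximum-likelihood metric in the paper."""
    return sum(w * (m - p) ** 2
               for m, p, w in zip(measured, predicted, weights))

# Hypothetical validation data: three temperature probes, the first
# being the most relevant to the target application.
measured  = [350.0, 340.0, 330.0]   # experimental values
predicted = [348.0, 345.0, 333.0]   # model predictions
weights   = [0.7, 0.2, 0.1]         # application-derived weights

metric = weighted_validation_metric(measured, predicted, weights)
```

Under these (made-up) weights, the 5-degree miss on the second probe contributes more than the 3-degree miss on the lightly weighted third probe, which is the qualitative behavior the weighting is meant to produce.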


Growth and morphology of cadmium chalcogenides: the synthesis of nanorods, tetrapods, and spheres from CdO and Cd(O2CCH3)2

Proposed for publication in the Journal of Materials Chemistry.

Bunge, Scott D.; Boyle, Timothy J.; Rodriguez, Marko A.; Headley, Thomas J.

In this work, we investigated the controlled growth of nanocrystalline CdE (E = S, Se, and Te) via the pyrolysis of CdO and Cd(O2CCH3)2 precursors, at the specific Cd to E mole ratio of 0.67 to 1. The experimental results reveal that while the growth of CdS produces only a spherical morphology, CdSe and CdTe exhibit rod-like and tetrapod-like morphologies of temporally controllable aspect ratios. Over a 7200 s time period, CdS spheres grew from 4 nm (15 s aliquot) to 5 nm, CdSe nanorods grew from dimensions of 10.8 x 3.6 nm (15 s aliquot) to 25.7 x 11.2 nm, and CdTe tetrapods with arms 15 x 3.5 nm (15 s aliquot) grew into a polydisperse mixture of spheres, rods, and tetrapods on the order of 20 to 80 nm. Interestingly, long tracks of self-assembled CdSe nanorods (3.5 x 24 nm) over one micron in length were observed. The temporal growth of each nanocrystalline material was monitored by UV-vis spectroscopy and transmission electron microscopy, and further characterized by powder X-ray diffraction. This study has elucidated the vastly different morphologies available for CdS, CdSe, and CdTe during the first 7200 s after injection of the desired chalcogenide.


Discrete sensor placement problems in distribution networks

Hart, William E.

We consider the problem of placing sensors in a network to detect and identify the source of any contamination. We study two variants of this problem: (1) sensor-constrained: we are allowed a fixed number of sensors and want to minimize contamination detection time; and (2) time-constrained: we must detect contamination within a given time limit and want to minimize the number of sensors required. Our main results are as follows. First, we give a necessary and sufficient condition for source identification. Second, we show that the sensor- and time-constrained versions of the problem are polynomially equivalent. Finally, we show that the sensor-constrained version of the problem is polynomially equivalent to the asymmetric k-center problem and that the time-constrained version of the problem is polynomially equivalent to the dominating set problem.
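The time-constrained variant has the flavor of a covering problem: each candidate sensor location "covers" the injection nodes it would detect within the time limit, and the goal is to cover every node with as few sensors as possible. The sketch below is a standard greedy covering heuristic on a hypothetical 5-node network, not the authors' algorithm or their equivalence proof:

```python
def greedy_sensor_cover(coverage, nodes):
    """Greedy covering heuristic for the time-constrained variant.

    coverage[s] is the set of injection nodes whose contamination sensor
    location s would detect within the time limit.  Repeatedly pick the
    location covering the most still-uncovered nodes.  (Illustrative of
    the dominating-set/covering structure only.)"""
    uncovered = set(nodes)
    chosen = []
    while uncovered:
        best = max(coverage, key=lambda s: len(coverage[s] & uncovered))
        if not coverage[best] & uncovered:
            raise ValueError("some nodes cannot be covered by any sensor")
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen

# Hypothetical network: 5 injection nodes, 3 candidate sensor locations.
coverage = {
    "A": {1, 2, 3},
    "B": {3, 4},
    "C": {4, 5},
}
sensors = greedy_sensor_cover(coverage, nodes={1, 2, 3, 4, 5})
```

Greedy picks "A" (covers three nodes) and then "C" (covers the remaining two); two sensors suffice here, whereas any single location leaves nodes undetected.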


Sensor placement in municipal water networks

Hart, William E.; Phillips, Cynthia A.

We present a model for optimizing the placement of sensors in municipal water networks to detect maliciously-injected contaminants. An optimal sensor configuration minimizes the expected fraction of the population at risk. We formulate this problem as an integer program, which can be solved with generally available IP solvers. We find optimal sensor placements for three real networks with synthetic risk and population data. Our experiments illustrate that this formulation can be solved relatively quickly, and that the predicted sensor configuration is relatively insensitive to uncertainties in the data used for prediction.
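The objective, minimizing the expected fraction of the population at risk over contamination scenarios, can be illustrated on a tiny instance. The sketch below brute-forces all k-sensor placements instead of solving an integer program, and the scenario probabilities and risk values are entirely hypothetical:

```python
from itertools import combinations

def expected_risk(placement, scenarios):
    """Expected fraction of the population at risk for a placement.
    Each scenario is (probability, {location: risk if that sensor is the
    first to detect}); the placed sensor with the smallest risk value
    (earliest detection) determines the damage for that scenario."""
    return sum(prob * min(risk[s] for s in placement)
               for prob, risk in scenarios)

def best_placement(locations, k, scenarios):
    """Enumerate all k-sensor placements and keep the lowest-risk one;
    a brute-force stand-in for the paper's integer program."""
    return min(combinations(locations, k),
               key=lambda p: expected_risk(p, scenarios))

# Hypothetical 3 candidate locations, two equally likely scenarios.
scenarios = [
    (0.5, {"a": 0.1, "b": 0.4, "c": 0.9}),
    (0.5, {"a": 0.8, "b": 0.2, "c": 0.3}),
]
placement = best_placement(["a", "b", "c"], 2, scenarios)
```

With two sensors allowed, {a, b} is optimal here: "a" catches the first scenario early and "b" the second, for an expected risk of 0.15. A real instance would hand the same objective to an IP solver, as the abstract describes.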


An introduction to the COLIN optimization interface

Hart, William E.

We describe COLIN, a Common Optimization Library INterface for C++. COLIN provides C++ template classes that define a generic interface for both optimization problems and optimization solvers. COLIN is specifically designed to facilitate the development of hybrid optimizers, for which one optimizer calls another to solve an optimization subproblem. We illustrate the capabilities of COLIN with an example of a memetic genetic programming solver.
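COLIN itself is a set of C++ template classes; the Python sketch below is only an analogue of the problem/solver separation it describes. The abstract base classes, the toy quadratic problem, and the pattern-search solver are all hypothetical names introduced here for illustration:

```python
from abc import ABC, abstractmethod

class Problem(ABC):
    """Generic optimization problem, in the spirit of COLIN's separation
    of problems from solvers (COLIN is C++ templates; this is a sketch)."""
    @abstractmethod
    def evaluate(self, x): ...

class Solver(ABC):
    """Generic solver interface.  A hybrid optimizer would hold another
    Solver and call it on a subproblem, the pattern COLIN is built for."""
    @abstractmethod
    def solve(self, problem, x0): ...

class Quadratic(Problem):
    def evaluate(self, x):
        return (x - 3.0) ** 2

class CoordinateSearch(Solver):
    """Tiny derivative-free pattern search over one variable."""
    def solve(self, problem, x0, step=1.0, tol=1e-6):
        x, fx = x0, problem.evaluate(x0)
        while step > tol:
            improved = False
            for cand in (x - step, x + step):
                fc = problem.evaluate(cand)
                if fc < fx:
                    x, fx, improved = cand, fc, True
            if not improved:
                step *= 0.5
        return x

xbest = CoordinateSearch().solve(Quadratic(), x0=0.0)
```

Because both sides code only against the abstract interface, any Solver can drive any Problem, and a solver can itself invoke another solver on a subproblem, which is the hybridization the abstract highlights.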


Design, implementation, and performance of MPI on Portals 3.0

International Journal of High Performance Computing Applications

Brightwell, Ronald B.; Riesen, Rolf; Maccabe, Arthur B.

This paper describes an implementation of the Message Passing Interface (MPI) on the Portals 3.0 data movement layer. Portals 3.0 provides low-level building blocks that are flexible enough to support higher-level message passing layers, such as MPI, very efficiently. Portals 3.0 is also designed to allow programmable network interface cards to offload message processing from the host processor, making it possible to overlap computation and MPI communication. We describe the basic building blocks in Portals 3.0, show how they can be put together to implement MPI, and describe the protocols of our MPI implementation. We look at several key operations within the implementation and describe the effects that a Portals 3.0 implementation has on scalability and performance. We also present preliminary performance results from our implementation for Myrinet.


Engineering a transformation of human-machine interaction to an augmented cognitive relationship

Forsythe, James C.; Bernard, Michael L.; Xavier, Patrick G.; Abbott, Robert G.; Speed, Ann S.; Brannon, Nathan B.

This project is being conducted by Sandia National Laboratories in support of the DARPA Augmented Cognition program. Work commenced in April of 2002. The objective for the DARPA program is to 'extend, by an order of magnitude or more, the information management capacity of the human-computer warfighter.' Initially, emphasis has been placed on detection of an operator's cognitive state so that systems may adapt accordingly (e.g., adjust information throughput to the operator in response to workload). Work conducted by Sandia focuses on development of technologies to infer an operator's ongoing cognitive processes, with specific emphasis on detecting discrepancies between machine state and an operator's ongoing interpretation of events.


Carbon sequestration in Synechococcus Sp.: from molecular machines to hierarchical modeling

Proposed for publication in OMICS: A Journal of Integrative Biology, Vol. 6, No. 4, 2002.

Heffelfinger, Grant S.; Faulon, Jean-Loup M.; Frink, Laura J.; Haaland, David M.; Hart, William E.; Lane, Todd L.; Plimpton, Steven J.; Roe, Diana C.; Timlin, Jerilyn A.; Martino, Anthony M.; Rintoul, Mark D.; Davidson, George S.

The U.S. Department of Energy recently announced the first five grants for the Genomes to Life (GTL) Program. The goal of this program is to "achieve the most far-reaching of all biological goals: a fundamental, comprehensive, and systematic understanding of life." While more information about the program can be found at the GTL website (www.doegenomestolife.org), this paper provides an overview of one of the five GTL projects funded, "Carbon Sequestration in Synechococcus Sp.: From Molecular Machines to Hierarchical Modeling." This project is a combined experimental and computational effort emphasizing the development, prototyping, and application of new computational tools and methods to elucidate the biochemical mechanisms of carbon sequestration in Synechococcus Sp., an abundant marine cyanobacterium known to play an important role in the global carbon cycle. Understanding, predicting, and perhaps manipulating carbon fixation in the oceans has long been a major focus of biological oceanography and has more recently been of interest to a broader audience of scientists and policy makers. It is clear that the oceanic sinks and sources of CO2 are important terms in the global environmental response to anthropogenic atmospheric inputs of CO2, and that oceanic microorganisms play a key role in this response. However, the relationship between this global phenomenon and the biochemical mechanisms of carbon fixation in these microorganisms is poorly understood. The project includes five subprojects: an experimental investigation, three computational biology efforts, and a fifth that addresses computational infrastructure challenges of relevance to this project and the Genomes to Life program as a whole.
Our experimental effort is designed to provide biology and data to drive the computational efforts, and includes significant investment in developing new experimental methods for uncovering protein partners, characterizing protein complexes, and identifying new binding domains. We will also develop and apply new data measurement and statistical methods for analyzing microarray experiments. Our computational efforts include coupling molecular simulation methods with knowledge discovery from diverse biological data sets for high-throughput discovery and characterization of protein-protein complexes, and developing a set of novel capabilities for inference of regulatory pathways in microbial genomes across multiple sources of information through the integration of computational and experimental technologies. These capabilities will be applied to Synechococcus regulatory pathways to characterize their interaction map and identify component proteins in these pathways. We will also investigate methods for combining experimental and computational results with visualization and natural language tools to accelerate discovery of regulatory pathways. Furthermore, given that the ultimate goal of this effort is to develop a systems-level understanding of how the Synechococcus genome affects carbon fixation at the global scale, we will develop and apply a set of tools for capturing the carbon fixation behavior of Synechococcus at different levels of resolution. Finally, because the explosion of data being produced by high-throughput experiments requires data analysis and models that are more computationally complex, more heterogeneous, and coupled to ever-increasing amounts of experimentally obtained data in varying formats, we have also established a companion computational infrastructure to support this effort as well as the Genomes to Life program as a whole.


Verification, validation, and predictive capability in computational engineering and physics

Bunge, Scott D.; Boyle, Timothy J.; Headley, Thomas J.; Kotula, Paul G.; Rodriguez, Marko A.

Developers of computer codes, analysts who use the codes, and decision makers who rely on the results of the analyses face a critical question: How should confidence in modeling and simulation be critically assessed? Verification and validation (V&V) of computational simulations are the primary methods for building and quantifying this confidence. Briefly, verification is the assessment of the accuracy of the solution to a computational model. Validation is the assessment of the accuracy of a computational simulation by comparison with experimental data. In verification, the relationship of the simulation to the real world is not an issue. In validation, the relationship between computation and the real world, i.e., experimental data, is the issue.


Molecular Dynamics Simulation of Polymer Dissolution

Thompson, Aidan P.

In the LIGA process for manufacturing microcomponents, a polymer film is exposed to an x-ray beam passed through a gold pattern. This is followed by the development stage, in which a selective solvent is used to remove the exposed polymer, reproducing the gold pattern in the polymer film. Development is essentially polymer dissolution, a physical process which is not well understood. We have used coarse-grained molecular dynamics simulation to study the early stage of polymer dissolution. In each simulation a film of non-glassy polymer was brought into contact with a layer of solvent. The mutual penetration of the two phases was tracked as a function of time. Several film thicknesses and two different chain lengths were simulated. In all cases, the penetration process conformed to ideal Fickian diffusion. We did not see the formation of a gel layer or other non-ideal effects. Variations in the Fickian diffusivities appeared to be caused primarily by differences in the bulk polymer film density.
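The Fickian signature reported here, penetration depth growing as the square root of time, is easy to check numerically. The sketch below fits the diffusivity from depth-versus-time data under the simple law depth² = 2·D·t; the data are synthetic and the analysis is illustrative, not the paper's code:

```python
def fickian_diffusivity(times, depths):
    """Zero-intercept least-squares fit of the Fickian law
    depth**2 = 2*D*t.  A linear depth**2-vs-time relation is the
    signature of the ideal Fickian penetration the simulations showed."""
    num = sum(t * d * d for t, d in zip(times, depths))
    den = sum(t * t for t in times)
    return num / (2.0 * den)

# Synthetic penetration data generated with D = 0.5 (arbitrary units),
# mimicking the mutual-penetration depth tracked in the simulations.
times = [1.0, 2.0, 4.0, 8.0]
depths = [(2.0 * 0.5 * t) ** 0.5 for t in times]
D = fickian_diffusivity(times, depths)
```

For real simulation output, deviations of depth² from a straight line through the origin would flag the gel-layer or other non-ideal effects that the abstract notes were absent.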


Evaluation of an eager protocol optimization for MPI

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Brightwell, Ronald B.; Underwood, Keith

Nearly all implementations of the Message Passing Interface (MPI) employ a two-level protocol for point-to-point messages. Short messages are sent eagerly to optimize for latency, and long messages are typically implemented using a rendezvous mechanism. In a rendezvous implementation, the sender must first send a request and receive an acknowledgment before the data can be transferred. While there are several possible reasons for using this strategy for long messages, most implementations are forced to use a rendezvous strategy due to operating system and/or network limitations. In this paper, we compare an implementation that uses a rendezvous protocol for long messages with an implementation that adds an eager optimization for long messages. We discuss implementation issues and provide a performance comparison for several micro-benchmarks. We also present a new micro-benchmark that may provide better insight into how these different protocols affect application performance. Results for this new benchmark indicate that, for larger messages, a significant number of receives must be pre-posted in order for an eager protocol optimization to out-perform a rendezvous protocol. © Springer-Verlag Berlin Heidelberg 2003.
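The trade-off the benchmark exposes can be captured in a toy cost model: eager saves the handshake latencies, but an eager message whose receive was not pre-posted must be buffered and copied. The parameter values below are hypothetical and the model is a deliberate simplification, not the paper's measurement methodology:

```python
def rendezvous_time(latency, bandwidth, nbytes):
    """Request + acknowledgment handshake (two extra latencies)
    before the data transfer begins."""
    return 3 * latency + nbytes / bandwidth

def eager_time(latency, bandwidth, nbytes, preposted, copy_rate):
    """Eager long-message send: no handshake, but if the matching
    receive was not pre-posted the payload lands in a bounce buffer
    and must be copied into place afterwards."""
    t = latency + nbytes / bandwidth
    if not preposted:
        t += nbytes / copy_rate
    return t

# Hypothetical parameters: 10 us latency, 250 MB/s wire, 500 MB/s memcpy.
lat, bw, copy_rate = 10e-6, 250e6, 500e6
nbytes = 1_000_000

r      = rendezvous_time(lat, bw, nbytes)
e_hit  = eager_time(lat, bw, nbytes, preposted=True,  copy_rate=copy_rate)
e_miss = eager_time(lat, bw, nbytes, preposted=False, copy_rate=copy_rate)
```

Even this crude model reproduces the qualitative result: eager beats rendezvous only when the receive is pre-posted, and loses badly when the large payload must be copied out of a bounce buffer.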


An MPI tool to measure application sensitivity to variation in communication parameters

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

León, Edgar A.; Maccabe, Arthur B.; Brightwell, Ronald B.

This work describes an apparatus which can be used to vary communication performance parameters for MPI applications, and provides a tool to analyze the impact of communication performance on parallel applications. Our tool is based on Myrinet (along with GM). We use an extension of the LogP model to allow greater flexibility in determining the parameter(s) to which parallel applications may be sensitive. We show that individual communication parameters can be independently controlled within a small percentage error. We also present the results of using our tool on a suite of parallel benchmarks. © Springer-Verlag Berlin Heidelberg 2003.
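The kind of parameter sensitivity the tool probes is easiest to see against the baseline LogP accounting, where a message stream's cost decomposes into latency L, per-message overhead o, and inter-message gap g. The function below is the textbook LogP cost for a pipelined stream of messages, shown only to make the knobs concrete; the paper uses an extension of LogP, not this exact formula:

```python
def logp_time(L, o, g, nmsg):
    """Time for nmsg back-to-back short messages under plain LogP:
    successive injections are spaced by max(g, o), and the final
    message incurs sender overhead, wire latency, and receiver
    overhead (L + 2*o)."""
    return (nmsg - 1) * max(g, o) + L + 2 * o

# Hypothetical parameters (seconds): L = 5 us, o = 1 us, g = 2 us.
t_one  = logp_time(L=5e-6, o=1e-6, g=2e-6, nmsg=1)
t_four = logp_time(L=5e-6, o=1e-6, g=2e-6, nmsg=4)
```

Varying L, o, or g independently, as the apparatus does in hardware, and watching which change moves application run time is exactly the sensitivity question the benchmarks address.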


Solidification Diagnostics for Joining and Microstructural Simulations

Robino, Charles V.; Hall, Aaron C.; Headley, Thomas J.; Roach, R.A.

Solidification is an important aspect of welding, brazing, soldering, LENS fabrication, and casting. The current trend toward utilizing large-scale process simulations and materials response models for simulation-based engineering is driving the development of new modeling techniques. However, the effective utilization of these models is, in many cases, limited by a lack of fundamental understanding of the physical processes and interactions involved. In addition, experimental validation of model predictions is required. We have developed new and expanded experimental techniques, particularly those needed for in-situ measurement of the morphological and kinetic features of the solidification process. The new high-speed, high-resolution video techniques and data extraction methods developed in this work have been used to identify several unexpected features of the solidification process, including the observation that the solidification front is often far more dynamic than previously thought. In order to demonstrate the utility of the video techniques, correlations have been made between the in-situ observations and the final solidification microstructure. Experimental methods for determination of the solidification velocity in highly dynamic pulsed laser welds have been developed, implemented, and used to validate and refine laser welding models. Using post-solidification metallographic techniques, we have discovered a previously unreported orientation relationship between ferrite and austenite in the Fe-Cr-Ni alloy system, and have characterized the conditions under which this new relationship develops. Taken together, the work has expanded both our understanding of, and our ability to characterize, solidification phenomena in complex alloy systems and processes.
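The basic data-extraction step, turning a tracked front position in successive high-speed video frames into a solidification velocity, can be sketched in a few lines. The frame rate and positions below are hypothetical, and this is only the simplest finite-difference reduction, not the authors' data-extraction code:

```python
def front_velocities(positions_um, fps):
    """Per-frame solidification-front velocities (um/s) from the front
    position (um) tracked in successive high-speed video frames:
    forward differences divided by the inter-frame interval."""
    dt = 1.0 / fps
    return [(b - a) / dt for a, b in zip(positions_um, positions_um[1:])]

# Hypothetical track: front advances 2 um per frame at 10,000 frames/s,
# i.e. a steady 2 cm/s front.
v = front_velocities([0.0, 2.0, 4.0, 6.0], fps=10_000)
```

In real data, frame-to-frame scatter in these velocities is what reveals the highly dynamic front behavior the abstract describes, rather than the steady advance assumed by simpler models.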


Computational Algorithms for Device-Circuit Coupling

Keiter, Eric R.; Hutchinson, Scott A.; Hoekstra, Robert J.; Rankin, Eric R.; Russo, Thomas V.; Waters, Lon J.

Circuit simulation tools (e.g., SPICE) have become invaluable in the development and design of electronic circuits. Similarly, device-scale simulation tools (e.g., DaVinci) are commonly used in the design of individual semiconductor components. Some problems, such as single-event upset (SEU), require the fidelity of a mesh-based device simulator but are only meaningful when dynamically coupled with an external circuit. For such problems a mixed-level simulator is desirable, but the two types of simulation generally have different (sometimes conflicting) numerical requirements. To address these considerations, we have investigated variations of the two-level Newton algorithm, which preserves tight coupling between the circuit and the partial differential equations (PDE) device, while optimizing the numerics for both.
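The two-level Newton structure can be shown in miniature: an outer Newton iteration on the circuit equations, where each residual evaluation triggers an inner Newton solve of the device equations. The toy device law u + u³ = V below stands in for a mesh-based PDE device solve; everything here is an illustrative sketch, not the algorithm variants studied in the report:

```python
def device_current(V, tol=1e-12):
    """Inner Newton loop: solve the toy device equation u + u**3 = V
    for the internal state u, then return the device current i = u.
    Stands in for a full mesh-based device simulation."""
    u = V
    for _ in range(50):
        f = u + u**3 - V
        if abs(f) < tol:
            break
        u -= f / (1.0 + 3.0 * u * u)
    return u

def solve_circuit(Vs, R, tol=1e-10):
    """Outer Newton loop on the circuit KCL residual
    (Vs - V)/R - i(V) = 0, with the device's small-signal conductance
    di/dV estimated by finite differences around the inner solve."""
    V = Vs
    for _ in range(50):
        f = (Vs - V) / R - device_current(V)
        if abs(f) < tol:
            break
        h = 1e-6
        didv = (device_current(V + h) - device_current(V)) / h
        V -= f / (-1.0 / R - didv)
    return V

V = solve_circuit(Vs=1.0, R=2.0)
```

The tight coupling the abstract emphasizes shows up here as the outer iteration seeing a fully converged device response at every step, while the inner solve keeps its own tolerance and iteration budget, letting the numerics of each level be tuned separately.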
