Publications

Results 9376–9400 of 9,998
The surfpack software library for surrogate modeling of sparse irregularly spaced multidimensional data

Collection of Technical Papers - 11th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference

Giunta, Anthony A.; Swiler, Laura P.; Brown, Shannon L.; Eldred, Michael S.; Richards, Mark D.; Cyr, Eric C.

Surfpack is a general-purpose software library of multidimensional function approximation methods for applications such as data visualization, data mining, sensitivity analysis, uncertainty quantification, and numerical optimization. Surfpack is primarily intended for use on sparse, irregularly-spaced, n-dimensional data sets where classical function approximation methods are not applicable. Surfpack is under development at Sandia National Laboratories, with a public release of Surfpack version 1.0 in August 2006. This paper provides an overview of Surfpack's function approximation methods along with some of its software design attributes. In addition, this paper provides some simple examples to illustrate the utility of Surfpack for data trend analysis, data visualization, and optimization. Copyright © 2006 by the American Institute of Aeronautics and Astronautics, Inc.
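
Surfpack itself is a C++ library; as a rough illustration of the kind of approximation it provides, the following sketch fits a radial basis function surrogate to sparse, irregularly spaced 2-D samples using SciPy. The test function, sample count, and the thin-plate-spline kernel are assumptions for illustration, not Surfpack's API.

    # Illustrative sketch only: mimics RBF surrogate modeling on scattered
    # data with SciPy; not the Surfpack library itself.
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(0)

    # Sparse, irregularly spaced samples of an "unknown" 2-D function.
    x_train = rng.uniform(-2.0, 2.0, size=(40, 2))
    y_train = np.sin(x_train[:, 0]) * np.cos(x_train[:, 1])

    # Fit a thin-plate-spline RBF surrogate to the scattered data.
    surrogate = RBFInterpolator(x_train, y_train, kernel='thin_plate_spline')

    # Evaluate the surrogate on a regular grid, e.g. for visualization
    # or as a cheap stand-in objective during optimization.
    grid = np.mgrid[-2:2:50j, -2:2:50j].reshape(2, -1).T
    y_pred = surrogate(grid)
    print("max abs error on grid:",
          np.abs(y_pred - np.sin(grid[:, 0]) * np.cos(grid[:, 1])).max())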

Measuring MPI send and receive overhead and application availability in high performance network interfaces

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Doerfler, Douglas W.; Brightwell, Ronald B.

In evaluating new high-speed network interfaces, the usual metrics of latency and bandwidth are commonly measured and reported. There are numerous other message-passing characteristics that can have a dramatic effect on application performance and that should be analyzed when evaluating a new interconnect. One such metric is overhead, which dictates the network's ability to allow the application to perform non-message-passing work while a transfer is taking place. A method for measuring overhead, and hence calculating application availability, is presented. Results for several next-generation network interfaces are also presented. © Springer-Verlag Berlin Heidelberg 2006.
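
The measurement idea can be sketched as follows: post a non-blocking send, overlap a known amount of pure-CPU work, and attribute whatever time is not hidden to host overhead. This mpi4py sketch is a deliberately simplified assumption about the methodology (message size, work loop, and two-rank setup are all illustrative), not the authors' benchmark.

    # Minimal sketch of send-overhead measurement with mpi4py (run with
    # 2 ranks). Not the paper's exact benchmark; sizes are assumptions.
    from mpi4py import MPI
    import numpy as np
    import time

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    msg = np.zeros(1 << 20, dtype='b')       # 1 MiB message

    def work(n):                             # pure-CPU filler loop
        s = 0.0
        for i in range(n):
            s += i * 0.5
        return s

    if rank == 0:
        # Baseline: message time with no overlapped work.
        t0 = time.perf_counter()
        comm.Isend([msg, MPI.BYTE], dest=1).Wait()
        t_msg = time.perf_counter() - t0

        # Time the work alone, then the send with the work overlapped.
        t0 = time.perf_counter(); work(200_000); t_cpu = time.perf_counter() - t0
        t0 = time.perf_counter()
        req = comm.Isend([msg, MPI.BYTE], dest=1)
        work(200_000)                        # compute while the transfer runs
        req.Wait()
        t_total = time.perf_counter() - t0

        overhead = t_total - t_cpu           # send time not hidden by compute
        print(f"overhead ~ {overhead*1e6:.1f} us, "
              f"availability ~ {(t_msg - overhead) / t_msg:.2f}")
    else:
        for _ in range(2):
            comm.Recv([msg, MPI.BYTE], source=0)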

Semi-infinite target penetration by ogive-nose penetrators: ALEGRA/SHISM code predictions for ideal and non-ideal impacts

American Society of Mechanical Engineers, Pressure Vessels and Piping Division (Publication) PVP

Bishop, Joseph E.; Voth, Thomas E.; Brown, Kevin H.

The physics of ballistic penetration mechanics is of great interest in penetrator and counter-measure design. The phenomenology associated with these events can be quite complex, and a significant number of studies have been conducted, ranging from purely experimental work to 'engineering' models based on empirical and/or analytical descriptions to fully coupled penetrator/target thermo-mechanical numerical simulations. Until recently, however, there has been a paucity of numerical studies considering 'non-ideal' impacts [1]. The goal of this work is to demonstrate the SHISM algorithm implemented in the ALEGRA Multi-Material ALE (Arbitrary Lagrangian Eulerian) code [13]. The SHISM algorithm models the three-dimensional continuum solid mechanics response of the target and penetrator in a fully coupled manner. This capability allows for the study of 'non-ideal' impacts (e.g., pitch, yaw, and/or obliquity of the target/penetrator pair). In this work, predictions using the SHISM algorithm are compared to previously published experimental results for selected ideal and non-ideal impacts of metal penetrator-target pairs. These results show good agreement between predicted and measured maximum depth-of-penetration (DOP) for ogive-nose penetrators with striking velocities in the 0.5 to 1.5 km/s range. Ideal impact simulations demonstrate convergence in predicted DOP for the velocity range considered. A theory is advanced to explain the disagreement between predicted and measured DOP at higher striking velocities, attributing the observed discrepancies to uncertainties in angle-of-attack. It is noted that the material models and associated parameters used here were unmodified from those in the literature; hence, no tuning of models was performed to match experimental data. Copyright © 2005 by ASME.

Kevlar and Carbon Composite body armor - Analysis and testing

American Society of Mechanical Engineers, Pressure Vessels and Piping Division (Publication) PVP

Uekert, Vanessa S.; Stofleth, Jerome H.; Preece, Dale S.; Risenmay, Matthew A.

Kevlar materials make excellent body armor due to their fabric-like flexibility and ultra-high tensile strength. Carbon composites are made up of many layers of carbon AS-4 material impregnated with epoxy. Fiber orientation is bidirectional, oriented at 0° and 90°. They also have ultra-high tensile strength but can be made into relatively hard armor pieces. Once many layers are cut and assembled, they can be ergonomically shaped in a mold during the heated curing process. Kevlar and carbon composites can be used together to produce light and effective body armor. This paper focuses on computer analysis and laboratory testing of a Kevlar/carbon composite cross-section proposed for body armor development. The carbon composite is inserted between layers of Kevlar. The computer analysis was performed with a Lagrangian transversely isotropic material model for both the Kevlar and the carbon composite, using the AUTODYN code. Both the computer analysis and the laboratory testing utilized different sizes of hardened steel fragments impacting the armor cross-section; the fragments are right-circular cylinders. Laboratory testing was undertaken by firing various sizes of hardened steel fragments at square test coupons of Kevlar layers and heat-cured carbon composites. The V50 velocity for each fragment size was determined from the testing; this V50 data can be used to compare the body armor design with previously designed armor systems. AUTODYN [1] computer simulations of the fragment impacts were compared to the experimental results and used to evaluate and guide the overall design process. This paper includes the detailed transversely isotropic computer simulations of the Kevlar/carbon composite cross-section as well as the experimental results and a comparison between the two. Conclusions are drawn about the design process and the validity of current computer modeling methods for Kevlar and carbon composites. Copyright © 2005 by ASME.
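
For context on the V50 metric, a common reduction (in the style of MIL-STD-662F; an assumption here, since the paper does not state its exact procedure) averages the highest partial-penetration and lowest complete-penetration striking velocities:

    # Hedged sketch of a conventional V50 estimate; the sample velocities
    # below are hypothetical, not data from the paper.
    def v50(partials, completes, n=3):
        """partials/completes: striking velocities (m/s) that were stopped
        by / passed through the armor coupon."""
        top_partials = sorted(partials, reverse=True)[:n]
        low_completes = sorted(completes)[:n]
        pool = top_partials + low_completes
        return sum(pool) / len(pool)

    # Hypothetical test-coupon data for one fragment size.
    print(v50(partials=[412.0, 398.0, 405.0, 380.0],
              completes=[431.0, 420.0, 447.0, 425.0]))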

An analysis of the double-precision floating-point FFT on FPGAs

Proceedings - 13th Annual IEEE Symposium on Field-Programmable Custom Computing Machines, FCCM 2005

Hemmert, Karl S.; Underwood, Keith

Advances in FPGA technology have led to dramatic improvements in double precision floating-point performance. Modern FPGAs boast several GigaFLOPs of raw computing power. Unfortunately, this computing power is distributed across 30 floating-point units with over 10 cycles of latency each. The user must find two orders of magnitude more parallelism than is typically exploited in a single microprocessor; thus, it is not clear that the computational power of FPGAs can be exploited across a wide range of algorithms. This paper explores three implementation alternatives for the Fast Fourier Transform (FFT) on FPGAs. The algorithms are compared in terms of sustained performance and memory requirements for various FFT sizes and FPGA sizes. The results indicate that FPGAs are competitive with microprocessors in terms of performance and that the "correct" FFT implementation varies based on the size of the transform and the size of the FPGA. © 2005 IEEE.
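
For reference, sustained FFT performance is conventionally reported by charging the transform 5N log2 N floating-point operations. A worked example using that convention (the timing below is invented for illustration, not a result from the paper):

    # Standard FFT flop-count convention; not a figure from the paper.
    import math

    def fft_gflops(n, seconds):
        flops = 5.0 * n * math.log2(n)   # conventional radix-2 op count
        return flops / seconds / 1e9

    # Hypothetical: a 1024-point double-precision FFT in 25 microseconds.
    print(f"{fft_gflops(1024, 25e-6):.2f} GFLOP/s")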

Perspectives on optimization under uncertainty: Algorithms and applications

Giunta, Anthony A.; Eldred, Michael S.; Swiler, Laura P.; Trucano, Timothy G.

This paper provides an overview of several approaches to formulating and solving optimization under uncertainty (OUU) engineering design problems. In addition, the topic of high-performance computing and OUU is addressed, with a discussion of the coarse- and fine-grained parallel computing opportunities in the various OUU problem formulations. The OUU approaches covered here are: sampling-based OUU, surrogate model-based OUU, analytic reliability-based OUU (also known as reliability-based design optimization), polynomial chaos-based OUU, and stochastic perturbation-based OUU.
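
As a concrete instance of the first formulation, sampling-based OUU can be sketched as an optimizer wrapped around a Monte Carlo estimate of a mean-plus-variance objective. The objective function, sample size, and 2-sigma weighting below are illustrative assumptions, not an example from the paper.

    # Minimal sampling-based OUU sketch: minimize mean + 2*std of f(x, xi),
    # with statistics estimated over a fixed Monte Carlo sample.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    xi = rng.normal(0.0, 0.1, size=500)      # fixed sample of uncertain input

    def f(x, xi):                            # hypothetical design objective
        return (x - 1.0) ** 2 + 10.0 * np.sin(5.0 * x) * xi

    def robust_objective(x):
        vals = f(x[0], xi)
        return vals.mean() + 2.0 * vals.std()   # mean + 2-sigma penalty

    res = minimize(robust_objective, x0=[0.0], method='Nelder-Mead')
    print("robust optimum x* =", res.x)

Reusing one fixed sample of xi across iterations (common random numbers) keeps the estimated objective smooth for the optimizer; resampling every iteration would inject noise into the search.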

Reverse engineering biological networks: applications in immune responses to bio-toxins

Faulon, Jean-Loup M.; Zhang, Zhaoduo Z.; Martino, Anthony M.; Timlin, Jerilyn A.; Haaland, David M.; Davidson, George S.; May, Elebeoba E.; Slepoy, Alexander S.

Our aim is to determine the network of events, or the regulatory network, that defines an immune response to a bio-toxin. As a model system, we are studying the T cell regulatory network triggered through tyrosine kinase receptor activation, using a combination of pathway stimulation and time-series microarray experiments. Our approach is composed of five steps: (1) microarray experiments and data error analysis, (2) data clustering, (3) data smoothing and discretization, (4) network reverse engineering, and (5) network dynamics analysis and fingerprint identification. The technological outcome of this study is a suite of experimental protocols and computational tools that reverse engineer regulatory networks from gene expression data. The practical biological outcome of this work is an immune response fingerprint in terms of gene expression levels. Inferring regulatory networks from microarray data is a new field of investigation that is no more than five years old. To the best of our knowledge, this work is the first attempt to integrate experiments, error analyses, data clustering, inference, and network analysis to solve a practical problem. Our systematic approach of counting, enumerating, and sampling networks matching experimental data is new to the field of network reverse engineering. The resulting mathematical analyses and computational tools lead to new results on their own and should be useful to others who analyze and infer networks.
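
Steps (3) and (4) can be illustrated with a toy sketch: threshold each gene's time series to binary states, then score candidate regulator rules by how well they predict the next state. The data and the one-regulator Boolean rule are assumptions for illustration; the paper's enumeration and sampling machinery is far more general.

    # Toy discretization + single-regulator inference; data are invented.
    import numpy as np

    expr = np.array([[0.2, 0.8, 0.9, 0.3, 0.1],   # gene 0 over 5 time points
                     [0.1, 0.2, 0.7, 0.8, 0.6],   # gene 1
                     [0.9, 0.7, 0.2, 0.1, 0.2]])  # gene 2

    # Step (3): discretize each gene against its own mean expression.
    states = (expr > expr.mean(axis=1, keepdims=True)).astype(int)

    def score(regulator, target):
        """Fraction of transitions where target(t+1) == regulator(t)."""
        return np.mean(states[target, 1:] == states[regulator, :-1])

    # Step (4): pick the best-scoring activator for each gene.
    for tgt in range(3):
        best = max((score(r, tgt), r) for r in range(3) if r != tgt)
        print(f"gene {tgt}: best single activator = gene {best[1]} "
              f"(score {best[0]:.2f})")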

A comparison of Navier Stokes and network models to predict chemical transport in municipal water distribution systems

World Water Congress 2005: Impacts of Global Climate Change - Proceedings of the 2005 World Water and Environmental Resources Congress

Van Bloemen Waanders, B.; Hammond, G.; Shadid, John N.; Collis, S.; Murray, R.

We investigate the accuracy of chemical transport in network models for small geometric configurations. Network models have successfully simulated the general operations of large water distribution systems. However, some of the simplifying assumptions associated with their implementation may cause inaccuracies if chemicals need to be characterized at a high level of detail. In particular, we are interested in precise transport behavior so that inversion and control problems can be applied to water distribution networks. As an initial phase, the Navier-Stokes equations combined with a convection-diffusion formulation were used to characterize the mixing behavior at a pipe intersection in two dimensions. Our numerical models predict that only on the order of 12–14% of the chemical mixes into the flow from the other inlet pipe. Laboratory results show similar behavior and suggest that even if our numerical model were able to resolve turbulence, it might not change the predicted mixing behavior. This conclusion may not hold, however, for other sets of operating conditions, and therefore we have started to develop a 3D implementation. Preliminary results for a duct geometry are presented. © 2005 ASCE.
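
The contrast the paper draws can be made concrete with a junction mass balance. Network codes typically assume complete mixing at a cross junction; the sketch below compares that rule with a partial-mixing fraction near the 12–14% the abstract reports. Flows and concentrations are hypothetical.

    # Complete-mixing junction rule vs. partial mixing; inputs are invented.
    def outlet_concs(c_in1, c_in2, q1, q2, mix_fraction=1.0):
        """Two inlets crossing at a junction with two equal-flow outlets.
        mix_fraction=1.0 reproduces the complete-mixing network-model rule."""
        c_mixed = (c_in1 * q1 + c_in2 * q2) / (q1 + q2)  # flow-weighted average
        # Each outlet keeps (1 - mix_fraction) of its own inlet's chemical.
        out1 = (1 - mix_fraction) * c_in1 + mix_fraction * c_mixed
        out2 = (1 - mix_fraction) * c_in2 + mix_fraction * c_mixed
        return out1, out2

    print(outlet_concs(1.0, 0.0, 1.0, 1.0, mix_fraction=1.0))   # network model
    print(outlet_concs(1.0, 0.0, 1.0, 1.0, mix_fraction=0.13))  # ~13% mixing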

Enhancing NIC performance for MPI using processing-in-memory

Proceedings - 19th IEEE International Parallel and Distributed Processing Symposium, IPDPS 2005

Rodrigues, Arun; Murphy, Richard; Brightwell, Ronald B.; Underwood, Keith D.

Processing-in-Memory (PIM) technology encompasses a range of research leveraging a tight coupling of memory and processing. The most unique features of the technology are extremely wide paths to memory, extremely low memory latency, and wide functional units. Many PIM researchers are also exploring extremely fine-grained multi-threading capabilities. This paper explores a mechanism for leveraging these features of PIM technology to enhance commodity architectures in a seemingly mundane way: accelerating MPI. Modern network interfaces leverage simple processors to offload portions of the MPI semantics, particularly the management of posted receive and unexpected message queues. Without adding cost or increasing clock frequency, using PIMs in the network interface can enhance performance. The results are a significant decrease in latency and increase in small message bandwidth, particularly when long queues are present.
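
The queue semantics being offloaded can be sketched in a few lines: an incoming message header is matched against the posted-receive queue in order, and unmatched arrivals are appended to an unexpected-message queue that later receives must search first. The message fields and wildcard handling below are simplified assumptions, not an MPI implementation.

    # Toy model of MPI posted-receive / unexpected-message queue matching.
    ANY = -1           # stand-in for MPI_ANY_SOURCE / MPI_ANY_TAG

    posted = []        # receives the application has posted, in order
    unexpected = []    # arrived messages with no matching receive yet

    def post_recv(source, tag):
        # A new receive first searches the unexpected queue, honoring wildcards.
        for i, (src, tg, data) in enumerate(unexpected):
            if source in (src, ANY) and tag in (tg, ANY):
                del unexpected[i]
                return data               # matched an already-arrived message
        posted.append((source, tag))
        return None

    def incoming(src, tg, data):
        # The NIC walks the posted-receive queue in order; first match wins.
        for i, (source, tag) in enumerate(posted):
            if source in (src, ANY) and tag in (tg, ANY):
                del posted[i]
                return ('delivered', data)
        unexpected.append((src, tg, data))
        return ('queued', None)

    print(incoming(0, 7, b'hello'))       # no receive posted -> queued
    print(post_recv(ANY, 7))              # matches from the unexpected queue

Walking these queues is exactly the memory-bound, fine-grained work the paper argues PIM's wide, low-latency memory paths accelerate.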

Molecular simulations of beta-amyloid protein near hydrated lipids (PECASE)

Thompson, Aidan P.

We performed molecular dynamics simulations of beta-amyloid (Aβ) protein and the Aβ(31-42) fragment in bulk water and near hydrated lipids to study the mechanism of neurotoxicity associated with the aggregation of the protein. We constructed full atomistic models using Cerius2 and ran simulations using LAMMPS. MD simulations with different conformations and positions of the protein fragment were performed. Thermodynamic properties were compared with previous literature and the results were analyzed. Longer simulations and data analyses based on the free energy profiles along the distance between the protein and the interface are ongoing.

Computational stability study of 3D flow in a differentially heated 8:1:1 cavity

3rd M.I.T. Conference on Computational Fluid and Solid Mechanics

Salinger, Andrew G.

The critical Rayleigh number Ra_cr of the Hopf bifurcation that signals the limit of steady flows in a differentially heated 8:1:1 cavity is computed. The two-dimensional analog of this problem was the subject of a comprehensive set of benchmark calculations that included the estimation of Ra_cr [1]. In this work we begin to answer the question of whether the 2D results carry over to 3D models. For the case of the 2D model extruded to a depth of 1, with no-slip/no-penetration and adiabatic boundary conditions placed at the added end walls, the steady flow and destabilizing eigenvectors qualitatively match those from the 2D model. A mesh resolution study extending to a 20-million-unknown model shows that the presence of these walls delays the first critical Rayleigh number from 3.06 × 10^5 to 5.13 × 10^5. © 2005 Elsevier Ltd.
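
One standard way to pin down such a critical value (an assumption about method; the paper performs eigenanalysis of a large discretized model) is to bisect on Ra until the real part of the leading eigenvalue crosses zero, which marks the Hopf point:

    # Bisection sketch for a Hopf crossing; the eigenvalue function is a toy
    # surrogate built around the paper's reported Ra_cr, not a CFD solve.
    def leading_real_part(Ra):
        return (Ra - 5.13e5) / 1e5        # stand-in for an eigensolver call

    def find_Ra_cr(lo, hi, tol=1e2):
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if leading_real_part(mid) < 0.0:   # still stable: raise lower bound
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    print(f"Ra_cr ~ {find_Ra_cr(3e5, 7e5):.3e}")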

A comparison of floating point and logarithmic number systems for FPGAs

Proceedings - 13th Annual IEEE Symposium on Field-Programmable Custom Computing Machines, FCCM 2005

Haselman, Michael; Beauchamp, Michael; Wood, Aaron; Hauck, Scott; Underwood, Keith; Hemmert, Karl S.

There have been many papers proposing the logarithmic number system (LNS) as an alternative to floating point because of its simpler multiplication, division, and exponentiation [1,4-9,13]. However, this advantage comes at the cost of complicated, inexact addition and subtraction, as well as the need to convert between the formats. In this work, we created a parameterized LNS library of computational units and compared them to an existing floating-point library. Specifically, we considered multiplication, division, addition, subtraction, and format conversion to determine when one format should be used over the other and when it is advantageous to change formats during a calculation. © 2005 IEEE.
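
The trade-off the paper quantifies is easy to see in miniature: with values stored as (sign, log2|x|), LNS multiplication reduces to a fixed-point addition, while LNS addition requires evaluating a nonlinear function of the exponent difference. A pure-Python sketch of the arithmetic, not the authors' FPGA library:

    # LNS arithmetic in miniature: mul is one add; add needs log2(1 ± 2^d).
    import math

    def to_lns(x):
        return (x < 0, math.log2(abs(x)))

    def from_lns(s, l):
        return -(2.0 ** l) if s else 2.0 ** l

    def lns_mul(a, b):                    # a single adder in hardware
        return (a[0] ^ b[0], a[1] + b[1])

    def lns_add(a, b):                    # needs a function-evaluation unit
        (sa, la), (sb, lb) = (a, b) if a[1] >= b[1] else (b, a)
        if sa == sb:
            return (sa, la + math.log2(1.0 + 2.0 ** (lb - la)))
        return (sa, la + math.log2(1.0 - 2.0 ** (lb - la)))  # inexact region

    x, y = to_lns(3.0), to_lns(5.0)
    print(from_lns(*lns_mul(x, y)), from_lns(*lns_add(x, y)))  # 15.0, 8.0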
