Publications


A multiscale discontinuous Galerkin method

Scovazzi, Guglielmo S.

We propose a new class of Discontinuous Galerkin (DG) methods based on variational multiscale ideas. Our approach begins with an additive decomposition of the discontinuous finite element space into continuous (coarse) and discontinuous (fine) components. Variational multiscale analysis is used to define an interscale transfer operator that associates coarse and fine scale functions. Composition of this operator with a donor DG method yields a new formulation that combines the advantages of DG methods with the attractive and more efficient computational structure of a continuous Galerkin method. The new class of DG methods is illustrated for a scalar advection-diffusion problem.
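
As a rough illustration of the construction described above (the notation here is assumed, not taken from the paper), the decomposition and the resulting coarse-scale problem might be written as:

```latex
% Hedged sketch of the multiscale DG construction; symbols are assumptions, not the paper's.
\[
  V^{\mathrm{DG}} = \bar{V} \oplus V', \qquad
  v^h = \bar{v} + v', \quad \bar{v} \in \bar{V}\ (\text{continuous, coarse}), \quad
  v' \in V'\ (\text{discontinuous, fine}).
\]
% An interscale transfer operator T maps a coarse function to its enriched counterpart,
\[
  T : \bar{V} \to V^{\mathrm{DG}}, \qquad T\bar{v} = \bar{v} + v'(\bar{v}),
\]
% and composing T with the donor DG form a_DG yields a problem posed on the continuous space:
\[
  \text{find } \bar{u} \in \bar{V} \ \text{such that} \quad
  a_{\mathrm{DG}}(T\bar{u},\, T\bar{w}) = \ell(T\bar{w}) \quad \forall\, \bar{w} \in \bar{V}.
\]
```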

Uniform accuracy of eigenpairs from a shift-invert Lanczos method

Proposed for publication in the SIAM Journal on Matrix Analysis and Applications, Special Issue on Accurate Solution of Eigenvalue Problems.

Hetmaniuk, Ulrich L.; Lehoucq, Richard B.

This paper analyzes the accuracy of the shift-invert Lanczos iteration for computing eigenpairs of the symmetric definite generalized eigenvalue problem. We provide bounds for the accuracy of the eigenpairs produced by shift-invert Lanczos given a residual reduction. We discuss the implications of our analysis for practical shift-invert Lanczos iterations. When the generalized eigenvalue problem arises from a conforming finite element method, we also comment on the uniform accuracy of bounds (independent of the mesh size h).
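
For readers who want to experiment with the iteration being analyzed, the sketch below (ours, not the authors') runs a shift-invert Lanczos solve on a small symmetric definite generalized eigenvalue problem via SciPy's ARPACK wrapper and reports the eigenpair residuals that the paper's bounds concern; the matrices are illustrative stand-ins for a 1-D finite element discretization.

```python
# Hedged illustration (not the paper's code): shift-invert Lanczos for the
# symmetric definite generalized eigenvalue problem  K x = lambda M x.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 200
# Simple 1-D finite-element-like stiffness and mass matrices (illustrative only).
K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc") * (n + 1)
M = sp.diags([1.0, 4.0, 1.0], [-1, 0, 1], shape=(n, n), format="csc") / (6.0 * (n + 1))

sigma = 50.0  # shift near the eigenvalues of interest
vals, vecs = eigsh(K, k=5, M=M, sigma=sigma, which="LM")  # shift-invert Lanczos mode

# Residuals ||K x - lambda M x|| indicate how accurate each computed eigenpair is.
for lam, x in zip(vals, vecs.T):
    r = K @ x - lam * (M @ x)
    print(f"lambda = {lam:12.6f}   residual = {np.linalg.norm(r):.2e}")
```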

The implications of working set analysis on supercomputing memory hierarchy design

Underwood, Keith; Rodrigues, Arun

Supercomputer architects strive to maximize the performance of scientific applications. Unfortunately, the large, unwieldy nature of most scientific applications has led to the creation of artificial benchmarks, such as SPEC-FP, for architecture research. Given the impact that these benchmarks have on architecture research, this paper seeks to understand how they relate to real-world applications within the Department of Energy. Since the memory system has been found to be a particularly important issue for many applications, the paper focuses on how the SPEC-FP benchmarks and DOE applications compare in their use of the memory system. The results indicate that while SPEC-FP is a well-balanced suite, supercomputing applications typically demand more from the memory system and must perform more 'other work' (in the form of integer computations) alongside the floating-point operations. The SPEC-FP suite generally demonstrates slightly more temporal locality, leading to somewhat lower bandwidth demands. The most striking result is the cumulative difference between the benchmarks and the applications in the requirements to sustain the floating-point operation rate: the DOE applications require significantly more data from main memory (not cache) per FLOP and dramatically more integer instructions per FLOP.

Calibration Under Uncertainty

Swiler, Laura P.; Trucano, Timothy G.

This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the "true" deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem, in which both the computed data and the experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of what such calibration means for model predictability. We call this approach Calibration under Uncertainty (CUU). This talk presents our current thinking on CUU. We outline some current approaches in the literature and discuss the Bayesian approach to CUU in detail.
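
The deterministic formulation the report critiques can be made concrete with a small sketch (ours; the toy model and parameter names are hypothetical): calibrate parameters by minimizing the squared mismatch between model predictions and data, the approach that CUU generalizes.

```python
# Hedged sketch (not from the report): deterministic least-squares calibration
# of a hypothetical toy model standing in for the real simulation.
import numpy as np
from scipy.optimize import least_squares

def model(theta, t):
    """Toy computer model y(t; theta)."""
    a, b = theta
    return a * np.exp(-b * t)

t_obs = np.linspace(0.0, 5.0, 20)
y_obs = 2.0 * np.exp(-0.7 * t_obs) + 0.05 * np.random.default_rng(0).normal(size=t_obs.size)

def residuals(theta):
    # Predicted minus experimental data; least_squares minimizes the sum of squares.
    return model(theta, t_obs) - y_obs

fit = least_squares(residuals, x0=[1.0, 1.0])
print("calibrated parameters:", fit.x)
# A Calibration-under-Uncertainty (Bayesian) treatment would instead place priors
# on theta and on model discrepancy and characterize a posterior distribution
# rather than a single minimizer.
```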

A comparison of two optimization methods for mesh quality improvement

Proposed for publication in Engineering with Computers.

Knupp, Patrick K.

We compare inexact Newton and coordinate descent optimization methods for improving the quality of a mesh by repositioning the vertices, where the overall quality is measured by the harmonic mean of the mean-ratio metric. The effects of problem size, element size heterogeneity, and various vertex displacement schemes on the performance of these algorithms are assessed for a series of tetrahedral meshes.
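
A minimal sketch of the objective being optimized, using the common algebraic form of the mean-ratio metric for tetrahedra (our assumption of the exact form used in the paper): per-element qualities are combined by a harmonic mean, and the optimizers reposition vertices to increase this value.

```python
# Hedged sketch (assumed metric form, not the paper's code): overall mesh quality
# as the harmonic mean of per-element mean-ratio values on a tetrahedral mesh.
import numpy as np

# Edge matrix of the regular reference tetrahedron and its inverse.
W = np.array([[1.0, 0.5, 0.5],
              [0.0, np.sqrt(3.0) / 2.0, np.sqrt(3.0) / 6.0],
              [0.0, 0.0, np.sqrt(2.0 / 3.0)]])
W_inv = np.linalg.inv(W)

def mean_ratio(verts):
    """Mean-ratio quality of one tetrahedron (1 = ideal, -> 0 as it degenerates)."""
    A = np.column_stack([verts[i] - verts[0] for i in (1, 2, 3)])  # physical edge matrix
    S = A @ W_inv
    det_S = np.linalg.det(S)
    if det_S <= 0.0:              # inverted element
        return 0.0
    return 3.0 * det_S ** (2.0 / 3.0) / np.sum(S * S)

def overall_quality(coords, tets):
    """Harmonic mean of element qualities over the whole mesh."""
    q = np.array([mean_ratio(coords[t]) for t in tets])
    return len(q) / np.sum(1.0 / q)

# Tiny example: one near-regular tetrahedron has quality close to 1.
coords = np.array([[0, 0, 0], [1, 0, 0], [0.5, np.sqrt(3) / 2, 0],
                   [0.5, np.sqrt(3) / 6, np.sqrt(2.0 / 3.0)]], dtype=float)
tets = [np.array([0, 1, 2, 3])]
print(overall_quality(coords, tets))
```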

Advanced mobile networking, sensing, and controls

Feddema, John T.; Byrne, Raymond H.; Lewis, Christopher L.; Harrington, John J.; Kilman, Dominique K.; Van Leeuwen, Brian P.; Robinett, R.D.

This report describes an integrated approach for designing communication, sensing, and control systems for mobile distributed systems. Graph-theoretic methods are used to analyze the input/output reachability and structural controllability and observability of a decentralized system. Embedded in each network node, this analysis will automatically reconfigure an ad hoc communication network for the sensing and control task at hand. The graph analysis can also be used to create optimal communication flow control based upon the spatial distribution of the network nodes. Edge-coloring algorithms tell us that the minimum number of time slots in a planar network is equal to the maximum number of adjacent nodes (the maximum degree of the undirected graph) plus, at most, a small number. Therefore, the more spread out the nodes are, the fewer time slots are needed for communication, and the smaller the latency between nodes. In a coupled system, this results in a more responsive sensor network and control system. Network protocols are developed to propagate this information, and distributed algorithms are developed to automatically adjust the number of time slots available for communication. These protocols and algorithms must be extremely efficient and should be updated only as network nodes move. In addition, queuing theory is used to analyze the delay characteristics of Carrier Sense Multiple Access (CSMA) networks. This report documents the analysis, simulation, and implementation of these algorithms performed under this Laboratory Directed Research and Development (LDRD) effort.
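
To make the time-slot argument concrete, here is a small greedy edge-coloring sketch (ours, not the report's protocol): each color corresponds to a TDMA time slot. Vizing's theorem guarantees that an optimal schedule needs at most the maximum node degree plus one slot; this simple greedy pass illustrates the idea, though it may use a few more slots in general.

```python
# Hedged sketch (not from the report): greedy edge coloring of a communication
# graph, assigning one time slot per link so adjacent links never share a slot.
from collections import defaultdict

def assign_time_slots(edges):
    """Greedily assign a time-slot index to each undirected edge (u, v)."""
    slots_at_node = defaultdict(set)   # slots already used by edges touching a node
    assignment = {}
    for u, v in edges:
        slot = 0
        # Pick the smallest slot unused at both endpoints.
        while slot in slots_at_node[u] or slot in slots_at_node[v]:
            slot += 1
        assignment[(u, v)] = slot
        slots_at_node[u].add(slot)
        slots_at_node[v].add(slot)
    return assignment

# Small ad hoc network: a node of degree 3 forces at least 3 slots.
links = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("C", "D")]
schedule = assign_time_slots(links)
print(schedule, "slots used:", 1 + max(schedule.values()))
```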

Glider communications and controls for the sea sentry mission

Feddema, John T.; Dohner, Jeffrey L.

This report describes a system-level study on the use of a swarm of sea gliders to detect, confirm, and kill littoral submarine threats. The report begins with a description of the problem and derives the probability of detecting a constant-speed threat without networking. It was concluded that glider motion does little to improve this probability unless the speed of a glider is greater than the speed of the threat. Therefore, before detection, the optimal strategy for a swarm of gliders is simply to lie in wait for the detection of a threat. The report proceeds by describing the effect of noise on the localization of a threat once initial detection is achieved. This noise is estimated as a function of threat location relative to the glider and is reduced over time through the use of an information or Kalman filter. In the next section, the swarm probability of confirming and killing a threat is formulated. Results are compared to a collection of stationary sensors. These results show that once a glider has the ability to move faster than the threat, the performance of the swarm is equal to the performance of a stationary swarm of gliders with confirmation and kill ranges equal to the detection range. Moreover, at glider speeds greater than the speed of the threat, swarm performance becomes a weak function of speed. At these speeds, swarm performance is dominated by detection range. Therefore, to further enhance swarm performance or to reduce the number of gliders required for a given performance, detection range must be increased. Communications latency is also examined. It was found that relatively large communication delays did little to change swarm performance. Thus, gliders may come to the surface and use SATCOM to communicate effectively in this application.
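
As an illustration of the filtering step the report describes (the motion model, time step, and noise levels below are assumptions, not the report's values), a single Kalman-filter predict/correct cycle that fuses noisy position fixes of the threat looks like this:

```python
# Hedged sketch (assumed model, not the report's): Kalman filtering of noisy
# threat-position measurements against a constant-velocity motion model.
import numpy as np

dt = 10.0                                     # seconds between measurements
F = np.block([[np.eye(2), dt * np.eye(2)],    # state = [x, y, vx, vy]
              [np.zeros((2, 2)), np.eye(2)]])
H = np.hstack([np.eye(2), np.zeros((2, 2))])  # only position is measured
Q = 1e-3 * np.eye(4)                          # process noise (threat maneuvering)
R = 500.0 * np.eye(2)                         # measurement noise; grows with range in practice

def kalman_step(x, P, z):
    # Predict with the motion model, then correct with the new measurement z.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.zeros(4), 1e4 * np.eye(4)
for z in ([1000.0, 200.0], [1040.0, 210.0], [1082.0, 199.0]):
    x, P = kalman_step(x, P, np.array(z))
print("estimated threat position:", x[:2], "position variance:", np.diag(P)[:2])
```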

Prism: a multi-view visualization tool for multi-physics simulation

Garasi, Christopher J.; Rogers, David R.

Complex simulations (in particular, those involving multiple coupled physics) cannot be understood solely using geometry-based visualizations. Such visualizations are necessary for interpreting results and gaining insight into kinematics; however, they are insufficient when striving to understand why or how something happened, or when investigating a simulation's dynamic evolution. For multiphysics simulations (e.g., those including solid dynamics with thermal conduction, magnetohydrodynamics, and radiation hydrodynamics), complex interactions between physics and material properties take place within the code and must be investigated in other ways. Drawing on extensive previous work in view coordination, brushing-and-linking techniques, and powerful visualization libraries, we have developed Prism, an application targeted at a specific analytic need at Sandia National Laboratories. This multiview scientific visualization tool tightly integrates geometric and phase-space views of simulation data and material models. Working closely with analysts, we have developed this production tool to promote understanding of complex multiphysics simulations. We discuss the current implementation of Prism, along with specific examples of results obtained using the tool.
