Publications

43 Results

FY16 Strategic Themes White Paper

Leland, Robert

The Science and Technology (S&T) Division 1000 Strategic Plan includes the Themes, Goals, and Actions for FY16. S&T will continue to support the Labs' Strategic Plan, Mission Areas, and Program Management Units by focusing on four strategic themes that align with the targeted needs of the Labs. The themes presented in this plan are Mission Engagement, Bold Outcomes, Collaborative Environment, and the Safety Imperative. Collectively, they emphasize diverse, collaborative teams and a self-reliant culture of safety that will deliver on our promise of exceptional service in the national interest like never before. Mission Engagement focuses on increasing collaboration at all levels, with emphasis at the strategic level with mission efforts across the Labs. Bold Outcomes seeks to increase the ability to take thoughtful risks with the goal of achieving transformative breakthroughs more frequently. Collaborative Environment strives for a self-aware, collaborative working environment that bridges the many cultures of Sandia. Finally, the Safety Imperative aims to minimize the risk of serious injury and to continuously strengthen the safety culture. Each of these themes is accompanied by a brief vision statement, several goals, and planned actions to support those goals throughout FY16 and leading into FY17.

More Details

FY16 Strategic Themes

Leland, Robert

I am pleased to present this summary of the Division 1000 Science and Technology Strategic Plan. This plan was created with considerable participation from all levels of management in Division 1000, and is intended to chart our course as we strive to contribute our very best in service of the greater Laboratory strategy. The plan is characterized by four strategic themes: Mission Engagement, Bold Outcomes, Collaborative Environment, and the Safety Imperative. Each theme is accompanied by a brief vision statement, several goals, and planned actions to support those goals throughout FY16. I want to be clear that this is not a strategy to be pursued in tension with the Laboratory strategic plan. Rather, it is intended to describe “how” we intend to show up for the “what” described in Sandia’s Strategic Plan. I welcome your feedback and look forward to our dialogue about these strategic themes. Please join me as we move forward to implement the plan in the coming year.

More Details

FY17 Strategic Themes

Leland, Robert

I am pleased to present this summary of the FY17 Division 1000 Science and Technology Strategic Plan. As this plan represents a continuation of the work we started last year, the four strategic themes (Mission Engagement, Bold Outcomes, Collaborative Environment, and Safety Imperative) remain the same, along with many of the goals. You will see most of the changes in the actions listed for each goal: We completed some actions, modified others, and added a few new ones. As I’ve stated previously, this is not a strategy to be pursued in tension with the Laboratory strategic plan. The Division 1000 strategic plan is intended to chart our course as we strive to contribute our very best in service of the greater Laboratory strategy. I welcome your feedback and look forward to our dialogue about these strategic themes. Please join me as we move forward to implement the plan in the coming months.

More Details

Performance, Efficiency, and Effectiveness of Supercomputers

Leland, Robert

Our primary purpose here is to offer to the general technical and policy audience a perspective on whether the supercomputing community should focus on improving the efficiency of supercomputing systems and their use rather than on building larger and ostensibly more capable systems that are used at low efficiency. After first summarizing our content and defining some necessary terms, we give a concise answer to this question. We then set this in context by characterizing performance of current supercomputing systems on a variety of benchmark problems and actual problems drawn from workloads in the national security, industrial, and scientific context. We also answer some related questions, identify some important technological trends, and offer a perspective on the significance of these trends. We hope by doing so to better equip the reader to evaluate commentary and controversy concerning supercomputing performance.

More Details

Large-Scale Data Analytics and Its Relationship to Simulation

Leland, Robert

Large-Scale Data Analytics (LSDA) problems require finding meaningful patterns in data sets that are so large as to require leading-edge processing and storage capability. LSDA problems are increasingly important for government mission work, industrial application, and scientific discovery. Effective solution of some important LSDA problems requires a computational workload that is substantially different from that associated with traditional High Performance Computing (HPC) simulations intended to help understand physical phenomena or to conduct engineering. While traditional HPC application codes exploit structural regularity and data locality to improve performance, many analytics problems lead more naturally to very fine-grained communication between unpredictable sets of processors, resulting in less regular communication patterns that do not map efficiently onto typical HPC systems. In both simulation and analytics domains, however, data movement increasingly dominates the performance, energy usage, and price of computing systems. It is therefore plausible that we could find a more synergistic technology path forward. Even though future machines may continue to be configured differently for the two domains, a more common technological roadmap between them, in the form of a degree of convergence in the underlying componentry and design principles to address these common technical challenges, could have substantial technical and economic benefits.

Author affiliations:
1. Senior Advisor, High Performance Computing, National Security and International Affairs Division, Office of Science and Technology Policy Institute
2. Senior Advanced Memory Systems Architect, DRAM Solutions Group, Micron Technologies, Inc.
3. Director, Computing Research, Sandia National Laboratories
4. Associate Laboratory Director for Computing Sciences, Lawrence Berkeley National Laboratory
5. Computational Sciences and Mathematics Division Manager, Pacific Northwest National Laboratory
6. Principal Member of Technical Staff, Sandia National Laboratories

More Details

Performance Efficiency and Effectiveness of Supercomputers

Leland, Robert; Rajan, Mahesh R.; Heroux, Michael A.

Our first purpose here is to offer to a general technical and policy audience a perspective on whether the supercomputing community should focus on improving the efficiency of supercomputing systems and their use rather than on building larger and ostensibly more capable systems that are used at low efficiency. After first summarizing our content and defining some necessary terms, we give a concise answer to this question. We then set this in context by characterizing performance of current supercomputing systems on a variety of benchmark problems and actual problems drawn from workloads in the national security, industrial, and scientific context. Along the way we answer some related questions, identify some important technological trends, and offer a perspective on the significance of these trends. Our second purpose is to give a reasonably broad and transparent overview of the related issue space and thereby to better equip the reader to evaluate commentary and controversy concerning supercomputing performance. For example, questions repeatedly arise concerning the Linpack benchmark and its predictive power, so we consider this in moderate depth as an example. We also characterize benchmark and application performance for scientific and engineering use of supercomputers and offer some guidance on how to think about these. Examples here are drawn from traditional scientific computing. Other problem domains, for example, data analytics, have different performance characteristics that are better captured by different benchmark problems or applications, but the story in those domains is similar in character and leads to similar conclusions with regard to the motivating question. For more on this topic, see Large-Scale Data Analytics and Its Relationship to Simulation. 
Author affiliations:
1. Director, Computing Research Center, Sandia National Laboratories
2. Distinguished Member of the Technical Staff, Sandia National Laboratories
3. Distinguished Member of the Technical Staff, Sandia National Laboratories
4. Distinguished Member of the Technical Staff, Sandia National Laboratories
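The efficiency question at the heart of this abstract — sustained performance as a fraction of theoretical peak — can be illustrated with a minimal sketch. The numbers below are invented placeholders for illustration, not measurements from the paper; the function name is hypothetical.

```python
# Sustained-versus-peak efficiency, the metric behind debates over whether
# to build larger systems that run at low efficiency. Values are made up.

def efficiency(sustained_gflops: float, peak_gflops: float) -> float:
    """Fraction of theoretical peak actually sustained by a workload."""
    return sustained_gflops / peak_gflops

# A Linpack-style dense solve typically runs near peak, while sparse or
# irregular applications often sustain only a few percent of peak.
print(f"dense solve: {efficiency(850.0, 1000.0):.1%}")
print(f"sparse app:  {efficiency(45.0, 1000.0):.1%}")
```

This gap between benchmark and application efficiency is one reason the abstract cautions against relying on Linpack alone as a predictor of real workload performance.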

More Details

Computing beyond Moore's Law

Computer

Shalf, John S.; Leland, Robert

Photolithography systems are on pace to reach atomic scale by the mid-2020s, necessitating alternatives in order to continue realizing faster, more predictable, and cheaper computing performance. If the end of Moore's law is real, a research agenda is needed to assess the viability of novel semiconductor technologies and navigate the ensuing challenges.

More Details

Advances in Domain Mapping of Massively Parallel Scientific Computations

Leland, Robert; Hendrickson, Bruce A.

One of the most important concerns in parallel computing is the proper distribution of workload across processors. For most scientific applications on massively parallel machines, the best approach to this distribution is to employ data parallelism; that is, to break the data structures supporting a computation into pieces and then to assign those pieces to different processors. Collectively, these partitioning and assignment tasks comprise the domain mapping problem.
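The partitioning-and-assignment idea described above can be sketched in a few lines. This is an illustrative toy, not the authors' method: real domain mappers use graph partitioning to balance load while also minimizing communication between pieces, whereas this sketch balances load only, and all names and data are hypothetical.

```python
# Toy data-parallel domain mapping: split a weighted sequence of mesh
# cells into contiguous pieces so each processor receives roughly an
# equal share of the total computational work.

def map_domain(weights, num_procs):
    """Return assignment[i] = index of the processor owning cell i."""
    total = sum(weights)
    target = total / num_procs      # ideal work per processor
    assignment = []
    proc, acc = 0, 0.0
    for w in weights:
        # start the next piece once this processor has its share of work
        if acc >= target and proc < num_procs - 1:
            proc += 1
            acc = 0.0
        assignment.append(proc)
        acc += w
    return assignment

cells = [1, 1, 2, 2, 4, 4, 1, 1]    # per-cell computational cost
print(map_domain(cells, 2))         # contiguous split into two pieces
```

Because cells are assigned in contiguous blocks, neighboring cells usually land on the same processor, which hints at why locality matters; minimizing the communication cut between pieces is the harder part that graph-based methods address.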

More Details

Architectural specification for massively parallel computers: An experience and measurement-based approach

Concurrency and Computation: Practice and Experience

Brightwell, Ronald B.; Camp, William; Cole, Benjamin; DeBenedictis, Erik; Leland, Robert; Tomkins, James; Maccabe, Arthur B.

In this paper, we describe the hardware and software architecture of the Red Storm system developed at Sandia National Laboratories. We discuss the evolution of this architecture and provide reasons for the different choices that have been made. We contrast our approach of leveraging high-volume, mass-market commodity processors to that taken for the Earth Simulator. We present a comparison of benchmarks and application performance that support our approach. We also project the performance of Red Storm and the Earth Simulator. This projection indicates that the Red Storm architecture is a much more cost-effective approach to massively parallel computing. Published in 2005 by John Wiley & Sons, Ltd.

More Details

Validating DOE's Office of Science "capability" computing needs

Leland, Robert; Camp, William

A study was undertaken to validate the 'capability' computing needs of DOE's Office of Science. More than seventy members of the community provided information about algorithmic scaling laws, so that the impact of having access to Petascale capability computers could be assessed. We have concluded that the Office of Science community has described credible needs for Petascale capability computing.
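The role of algorithmic scaling laws in such a study can be shown with a small worked example. All values and the function name below are illustrative assumptions, not figures from the validation study.

```python
# If an application's work grows as n**alpha in problem size n, a
# capability increase of 1000x (terascale -> petascale) lets the problem
# size grow by 1000**(1/alpha) at fixed runtime. Alphas are illustrative.

def problem_size_growth(capability_ratio: float, alpha: float) -> float:
    """Factor by which problem size can grow, assuming work ~ n**alpha."""
    return capability_ratio ** (1.0 / alpha)

for alpha in (1.0, 2.0, 3.0):
    growth = problem_size_growth(1000.0, alpha)
    print(f"work ~ n^{alpha:g}: problem size can grow {growth:.1f}x")
```

The point of collecting community scaling laws is visible here: for an O(n^3) algorithm, a thousandfold capability increase buys only a tenfold larger problem, so the case for petascale rests on what science that tenfold increase enables.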

More Details

Dual mode use requirements analysis for the institutional cluster

Leland, Robert

This paper analyzes the additional costs that would be incurred in supporting dual-mode (i.e., both classified and unclassified) use of the Institutional Computing (IC) hardware. The following five options are considered:
- Periods processing, in which a fraction of the system alternates in time between classified and unclassified modes.
- Static split, in which the system is constructed as a set of smaller clusters, each of which remains in one mode or the other.
- Re-configurable split, in which the system is constructed in a split fashion but a mechanism is provided to reconfigure it very infrequently.
- Red/black switching, in which a mechanism is provided to switch sections of the system between modes frequently.
- Complementary operation, in which parts of the system are operated entirely in one mode at one geographical site and entirely in the other mode at the other site, with other systems repartitioned to balance workload.
These options are evaluated against eleven criteria, such as disk storage costs, distance computing costs, and reductions in capability and capacity resulting from various factors. The evaluation is both qualitative and quantitative, and is captured in summary tables.
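The kind of option-versus-criteria evaluation the paper tabulates can be sketched as follows. The five option names come from the abstract, but the three criteria shown and every numeric score are invented placeholders, not the paper's eleven criteria or its results.

```python
# Illustrative option-vs-criteria cost table; lower total cost is better.
# Scores are arbitrary placeholders for demonstration only.

scores = {
    "periods processing":      {"disk storage": 2, "capability loss": 3, "switch overhead": 3},
    "static split":            {"disk storage": 1, "capability loss": 4, "switch overhead": 0},
    "re-configurable split":   {"disk storage": 1, "capability loss": 3, "switch overhead": 1},
    "red/black switching":     {"disk storage": 2, "capability loss": 2, "switch overhead": 2},
    "complementary operation": {"disk storage": 3, "capability loss": 2, "switch overhead": 0},
}

def rank_options(scores):
    """Order options by total cost across all criteria, cheapest first."""
    return sorted(scores, key=lambda opt: sum(scores[opt].values()))

for opt in rank_options(scores):
    print(f"{sum(scores[opt].values()):2d}  {opt}")
```

A real evaluation would weight the criteria and keep the qualitative commentary alongside the numbers, as the paper's summary tables do; the sketch only shows the mechanical ranking step.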

More Details

Comparative Study of Hexahedral and Tetrahedral Elements for Non-linear Structural Analysis

Leland, Robert

Finite elements are routinely used for analysis of real-world problems in a wide range of engineering disciplines. The types of problems for which they are used include, but are not limited to, structural engineering, materials science, heat transfer, optics, and electromagnetics. While linearity is a good starting assumption for many problems, obtaining reasonable solutions to real-life problems often requires treating them as non-linear. It is, therefore, necessary that the users of finite element codes be aware of the capabilities and limitations of their analysis tools.

More Details