Publications

Leveraging Production Visualization Tools In Situ

Mathematics and Visualization

Moreland, Kenneth D.; Bauer, Andrew C.; Geveci, Berk; O’Leary, Patrick; Whitlock, Brad

The visualization community has invested decades of research and development into producing large-scale production visualization tools. Although in situ is a paradigm shift for large-scale visualization, many of the same algorithms and operations apply regardless of whether the visualization is run post hoc or in situ. Thus, there is great benefit in taking the large-scale code originally designed for post hoc use and leveraging it in situ. This chapter describes two in situ libraries, Libsim and Catalyst, that are based on mature visualization tools, VisIt and ParaView, respectively. Because they are based on fully featured visualization packages, they each provide a wealth of features. For each of these systems we outline how the simulation and visualization software are coupled, what the runtime behavior and communication between these components are, and how the underlying implementation works. We also provide use cases demonstrating the systems in action. Both of these in situ libraries, as well as the underlying products they are based on, are made freely available as open-source products. The overviews in this chapter provide a toehold to the practical application of in situ visualization.

ATDM/ECP Milestone Memo WBS 2.3.4.04 / SNL ATDM Data and Visualization Projects STDV04-21 - [MS1/YR2] Q3: Prototype Catalyst/ParaView in-situ viz for unsteady RV flow on ATS-1

Moreland, Kenneth D.

ParaView Catalyst is an API for accessing the scalable visualization infrastructure of ParaView in an in-situ context. In-situ visualization allows simulation codes to perform data post-processing operations while the simulation is running. In-situ techniques can reduce data post-processing time, allow computational steering, and increase the resolution and frequency of data output. For a simulation code to use ParaView Catalyst, adapter code needs to be created that interfaces the simulation's data structures to ParaView/VTK data structures. Under ATDM, Catalyst is to be integrated with SPARC, a code used for simulation of unsteady reentry vehicle flow.
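
To make the adapter idea concrete, the following is a minimal sketch of what such adapter code typically looks like with the legacy Catalyst C++ API; the Simulation type and BuildVTKGrid helper are hypothetical placeholders for the simulation's own data structures.

    // Minimal adapter sketch using the legacy ParaView Catalyst C++ API
    // (vtkCPProcessor and friends). "Simulation" and BuildVTKGrid() are
    // hypothetical stand-ins for the simulation's own data structures and the
    // code that wraps them in VTK objects.
    #include <vtkCPDataDescription.h>
    #include <vtkCPInputDataDescription.h>
    #include <vtkCPProcessor.h>
    #include <vtkCPPythonScriptPipeline.h>
    #include <vtkNew.h>
    #include <vtkSmartPointer.h>
    #include <vtkUnstructuredGrid.h>

    struct Simulation;  // hypothetical simulation state
    vtkSmartPointer<vtkUnstructuredGrid> BuildVTKGrid(Simulation& sim);  // hypothetical helper

    static vtkCPProcessor* Processor = nullptr;

    void CatalystInitialize(const char* scriptFile)
    {
      Processor = vtkCPProcessor::New();
      Processor->Initialize();
      vtkNew<vtkCPPythonScriptPipeline> pipeline;
      pipeline->Initialize(scriptFile);  // Catalyst Python pipeline script
      Processor->AddPipeline(pipeline.GetPointer());
    }

    void CatalystCoProcess(Simulation& sim, double time, int timeStep)
    {
      vtkNew<vtkCPDataDescription> description;
      description->AddInput("input");
      description->SetTimeData(time, timeStep);
      // Only build VTK data structures when the pipeline needs this time step.
      if (Processor->RequestDataDescription(description.GetPointer()) != 0)
      {
        vtkSmartPointer<vtkUnstructuredGrid> grid = BuildVTKGrid(sim);
        description->GetInputDescriptionByName("input")->SetGrid(grid);
        Processor->CoProcess(description.GetPointer());
      }
    }

    void CatalystFinalize()
    {
      Processor->Finalize();
      Processor->Delete();
      Processor = nullptr;
    }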

ECP ST Capability Assessment Report VTK-m

Moreland, Kenneth D.

The ECP/VTK-m project is providing the core capabilities to perform scientific visualization on exascale architectures. The ECP/VTK-m project fills the critical feature gap of performing visualization and analysis on processors such as GPUs and many-integrated-core devices. The results of this project will be delivered in tools like ParaView, VisIt, and Ascent as well as in stand-alone form. Moreover, these tools depend on this ECP effort to be able to make effective use of ECP architectures.

The future of scientific workflows

International Journal of High Performance Computing Applications

Deelman, Ewa; Peterka, Tom; Altintas, Ilkay; Carothers, Christopher D.; van Dam, Kerstin K.; Moreland, Kenneth D.; Parashar, Manish; Ramakrishnan, Lavanya; Taufer, Michela; Vetter, Jeffrey

Today’s computational, experimental, and observational sciences rely on computations that involve many related tasks. The success of a scientific mission often hinges on the computer automation of these workflows. In April 2015, the US Department of Energy (DOE) invited a diverse group of domain and computer scientists from national laboratories supported by the Office of Science and the National Nuclear Security Administration, from industry, and from academia to review the workflow requirements of DOE’s science and national security missions, to assess the current state of the art in science workflows, to understand the impact of emerging extreme-scale computing systems on those workflows, and to develop requirements for automated workflow management in future and existing environments. This article is a summary of the opinions of over 50 leading researchers attending this workshop. We highlight use cases, computing systems, and workflow needs, and we conclude by summarizing the remaining challenges this community sees that inhibit large-scale scientific workflows from becoming a mainstream tool for extreme-scale science.

ECP Milestone Report WBS 2.3.4.13 ECP/VTK-m FY18Q1 [MS-18/01-03] Multiblock / Gradients / Release STDA05-5

Moreland, Kenneth D.; Pugmire, David P.; Geveci, Berk G.

The FY18Q1 milestone of the ECP/VTK-m project includes the implementation of a multiblock data set, the completion of a gradients filtering operation, and the release of version 1.1 of the VTK-m software. With the completion of this milestone, the new multiblock data set allows us to iteratively schedule algorithms on composite data structures such as assemblies or hierarchies like AMR. The new gradient algorithms approximate derivatives of fields in 3D structures with finite differences. Finally, the release of VTK-m version 1.1 tags a stable release of the software that can more easily be incorporated into external projects.
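
As a rough illustration of the finite-difference idea behind the gradient filter (a conceptual sketch only, not the VTK-m code), a central-difference gradient on a uniform structured grid can be computed as follows:

    // Conceptual sketch of a central-difference gradient of a scalar field f
    // stored on a uniform structured grid of nx x ny x nz points with spacing
    // dx, dy, dz (one-sided differences at the boundaries; assumes each
    // dimension has at least two points). Not the VTK-m implementation.
    #include <array>
    #include <vector>

    std::vector<std::array<double, 3>> Gradient(const std::vector<double>& f,
                                                int nx, int ny, int nz,
                                                double dx, double dy, double dz)
    {
      auto at = [&](int i, int j, int k) { return f[(k * ny + j) * nx + i]; };
      std::vector<std::array<double, 3>> gradient(f.size());
      for (int k = 0; k < nz; ++k)
        for (int j = 0; j < ny; ++j)
          for (int i = 0; i < nx; ++i)
          {
            // Neighbor indices, clamped so boundaries use a one-sided difference.
            int im = (i > 0) ? i - 1 : i, ip = (i < nx - 1) ? i + 1 : i;
            int jm = (j > 0) ? j - 1 : j, jp = (j < ny - 1) ? j + 1 : j;
            int km = (k > 0) ? k - 1 : k, kp = (k < nz - 1) ? k + 1 : k;
            gradient[(k * ny + j) * nx + i] = {
              (at(ip, j, k) - at(im, j, k)) / ((ip - im) * dx),
              (at(i, jp, k) - at(i, jm, k)) / ((jp - jm) * dy),
              (at(i, j, kp) - at(i, j, km)) / ((kp - km) * dz)
            };
          }
      return gradient;
    }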

XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Year-end report FY17

Moreland, Kenneth D.; Pugmire, David P.; Rogers, David M.; Childs, Hank C.; Ma, Kwan-Liu M.; Geveci, Berk G.

The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

Milestone Completion Report WBS 1.3.5.05 ECP/VTK-m FY17Q4 [MS-17/03-06] Key Reduce / Spatial Division / Basic Advect / Normals STDA05-4

Moreland, Kenneth D.

The FY17Q4 milestone of the ECP/VTK-m project includes the completion of a key-reduce scheduling mechanism, a spatial division algorithm, an algorithm for basic particle advection, and the computation of smoothed surface normals. With the completion of this milestone, we are able to, respectively, more easily group like elements (a common visualization algorithm operation), provide the fundamentals for geometric search structures, provide the fundamentals for many flow visualization algorithms, and provide more realistic rendering of surfaces approximated with facets.
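
To illustrate the key-reduce (reduce-by-key) pattern used to group like elements, here is a simple serial sketch; VTK-m's scheduling mechanism performs the same grouping in a data-parallel way, and the key and value types below are placeholders.

    // Conceptual reduce-by-key sketch: group values that share a key and
    // combine each group with a binary operation (here, summation). This
    // serial version only shows the pattern, not VTK-m's parallel scheduler.
    #include <algorithm>
    #include <utility>
    #include <vector>

    std::vector<std::pair<int, double>> ReduceByKey(std::vector<std::pair<int, double>> pairs)
    {
      // 1. Sort so that equal keys become contiguous runs.
      std::sort(pairs.begin(), pairs.end(),
                [](const auto& a, const auto& b) { return a.first < b.first; });

      // 2. Collapse each run of equal keys into a single (key, reduced value) pair.
      std::vector<std::pair<int, double>> reduced;
      for (const auto& kv : pairs)
      {
        if (!reduced.empty() && reduced.back().first == kv.first)
          reduced.back().second += kv.second;  // combine like elements
        else
          reduced.push_back(kv);
      }
      return reduced;
    }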

Milestone Completion Report WBS 1.3.5.05 ECP/VTK-m FY17Q3 [MS-17/02] Faceted Surface Normals STDA05-3

Moreland, Kenneth D.

The FY17Q3 milestone of the ECP/VTK-m project includes the completion of a VTK-m filter that computes normal vectors for surfaces. Normal vectors point perpendicular to the surface and are an important direction when rendering the surface. The implementation includes the parallel algorithm itself, a filter module to simplify integrating it into other software, and documentation in the VTK-m Users’ Guide. With the completion of this milestone, we are able to provide the necessary information to rendering systems for appropriate shading of surfaces. This milestone also feeds into subsequent milestones that progressively improve the approximation of surface direction.
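
For reference, the per-facet normal such a filter starts from is simply the normalized cross product of two triangle edges, as in this generic sketch (not the VTK-m filter itself):

    // Conceptual sketch: the normal of a triangular facet (p0, p1, p2) is the
    // normalized cross product of two of its edges. Filters like the one
    // described above compute these per facet and can later average them per
    // vertex for smoother shading. Assumes a non-degenerate triangle.
    #include <array>
    #include <cmath>

    using Vec3 = std::array<double, 3>;

    Vec3 FacetNormal(const Vec3& p0, const Vec3& p1, const Vec3& p2)
    {
      Vec3 e1 = { p1[0] - p0[0], p1[1] - p0[1], p1[2] - p0[2] };
      Vec3 e2 = { p2[0] - p0[0], p2[1] - p0[1], p2[2] - p0[2] };
      Vec3 n = { e1[1] * e2[2] - e1[2] * e2[1],   // cross product e1 x e2
                 e1[2] * e2[0] - e1[0] * e2[2],
                 e1[0] * e2[1] - e1[1] * e2[0] };
      double len = std::sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
      return { n[0] / len, n[1] / len, n[2] / len };
    }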

XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Mid-year report FY17 Q2

Moreland, Kenneth D.; Pugmire, David P.; Rogers, David M.; Childs, Hank C.; Ma, Kwan-Liu M.; Geveci, Berk G.

The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

Milestone Completion Report WBS 1.3.5.05 ECP/VTK-m FY17Q2 [MS-17/01] Better Dynamic Types Design SDA05-1

Moreland, Kenneth D.

The FY17Q2 milestone of the ECP/VTK-m project, which is the first milestone, includes the completion of design documents for the introduction of virtual methods into the VTK-m framework: specifically, the ability, from within code running on a device (e.g. a GPU or Xeon Phi), to jump to a virtual method specified at run time. This change will enable us to drastically reduce the compile time and the executable code size for the VTK-m library.

Our first design introduced the idea of adding virtual functions to classes that are used during algorithm execution. (Virtual methods were previously banned from the so-called execution environment.) The design was straightforward. VTK-m already has the generic concepts of an “array handle” that provides a uniform interface to memory of different structures and an “array portal” that provides generic access to said memory. These array handles and portals use C++ templating to adapt them to different memory structures. This composition provides a powerful ability to adapt to data sources, but it requires knowing static types. The proposed design creates a template specialization of an array portal that decorates another array handle while hiding its type. In this way we can wrap any type of static array handle and then feed it to a single compiled instance of a function.

The second design focused on the mechanics of implementing virtual methods on parallel devices, with a focus on CUDA. Our initial experiments on CUDA showed a very large overhead for the standard approach of using C++ classes with virtual methods. Instead, we are using an alternative approach based on C-style function pointers.

With the completion of this milestone, we are able to move to the implementation of objects with virtual-like methods. The upshot will be much faster compile times and much smaller library and executable sizes.
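
The array-hiding idea described above is essentially type erasure implemented with function pointers. The following is a minimal, self-contained sketch under that interpretation; the class and function names are invented for illustration and are not VTK-m's.

    // Sketch of type erasure via function pointers: a non-templated view wraps
    // any concrete, statically typed array so that a single compiled function
    // can read from it. VTK-m's array handles and portals are considerably
    // more elaborate; this only shows the principle.
    #include <cstddef>
    #include <iostream>
    #include <vector>

    class ErasedDoubleArray
    {
    public:
      template <typename ArrayType>
      explicit ErasedDoubleArray(const ArrayType& array)
        : Data(&array)
        , Size(array.size())
        , GetFunction([](const void* data, std::size_t i) {
            return static_cast<double>((*static_cast<const ArrayType*>(data))[i]);
          })
      {
      }

      double Get(std::size_t i) const { return this->GetFunction(this->Data, i); }
      std::size_t GetSize() const { return this->Size; }

    private:
      const void* Data;                                // type-erased pointer to the array
      std::size_t Size;
      double (*GetFunction)(const void*, std::size_t); // function pointer instead of a vtable
    };

    // A single compiled instance of this function works for any wrapped array type.
    double Sum(const ErasedDoubleArray& array)
    {
      double total = 0;
      for (std::size_t i = 0; i < array.GetSize(); ++i)
        total += array.Get(i);
      return total;
    }

    int main()
    {
      std::vector<float> floats = { 1.5f, 2.5f };
      std::vector<int> ints = { 1, 2, 3 };
      std::cout << Sum(ErasedDoubleArray(floats)) << " " << Sum(ErasedDoubleArray(ints)) << "\n";
    }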

XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem. Mid-year report FY16 Q2

Moreland, Kenneth D.; Sewell, Christopher S.; Childs, Hank C.; Ma, Kwan-Liu M.; Geveci, Berk G.; Meredith, Jeremy M.

The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

Why we use bad color maps and what you can do about it

Human Vision and Electronic Imaging 2016, HVEI 2016

Moreland, Kenneth D.

We know the rainbow color map is terrible, and it is emphatically reviled by the visualization community, yet its use continues to persist. Why do we continue to use this perceptual encoding with so many known flaws? Instead of focusing on why we should not use rainbow colors, this position statement explores the rationale for why we do pick these colors despite their flaws. Often the decision is influenced by a lack of knowledge, but even experts who know better sometimes choose poorly. A larger issue is that we have inadvertently made the rainbow color map the expedient choice. Knowing why the rainbow color map is used will help us move away from it. Education is good, but clearly not sufficient. We gain traction by making sensible color alternatives more convenient. It is not feasible to force a color map on users. Our goal is to supplant the rainbow color map as a common standard, and we will find that even those wedded to it will migrate away.

XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Year-end report FY15 Q4

Moreland, Kenneth D.; Sewell, Christopher S.; Childs, Hank C.; Ma, Kwan-Liu M.; Geveci, Berk G.; Meredith, Jeremy M.

The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

Formal metrics for large-scale parallel performance

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Moreland, Kenneth D.; Oldfield, Ron A.

Performance measurement of parallel algorithms is well studied and well understood. However, a flaw in traditional performance metrics is that they rely on comparisons to serial performance with the same input. This comparison is convenient for theoretical complexity analysis but impossible to perform in large-scale empirical studies with data sizes far too large to run on a single serial computer. Consequently, scaling studies currently rely on ad hoc methods that, although effective, have no grounded mathematical models. In this position paper we advocate using a rate-based model that has a concrete meaning relative to speedup and efficiency and that can be used to unify strong and weak scaling studies.
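
A plausible sketch of such a rate-based formulation (an illustrative assumption, not necessarily the exact definitions used in the paper): for a problem of size $n$ solved in time $T(n,p)$ on $p$ processing elements, define

    R(n,p) = \frac{n}{T(n,p)}, \qquad E(n,p) = \frac{R(n,p)}{p \, r^{*}},

where $r^{*}$ is a reference rate per processing element (for example, the best observed per-element rate from a small calibration run). Unlike the traditional efficiency $E = T(n,1) / (p\,T(n,p))$, this form needs no serial baseline on the full input, and it reads identically for strong-scaling studies (fixed $n$, growing $p$) and weak-scaling studies ($n$ grown with $p$).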

A pervasive parallel framework for visualization: final report for FWP 10-014707

Moreland, Kenneth D.

We are on the threshold of a transformative change in the basic architecture of high-performance computing. The use of accelerator processors, characterized by large core counts, shared but asymmetrical memory, and heavy thread loading, is quickly becoming the norm in high-performance computing. These accelerators represent significant challenges in updating our existing base of software. An intrinsic problem with this transition is a fundamental programming shift from message-passing processes to much finer-grained thread scheduling with memory sharing. Another problem is the lack of stability in accelerator implementation; processor and compiler technology is currently changing rapidly. This report documents the results of our three-year ASCR project to address these challenges. Our project includes the development of the Dax toolkit, which contains the beginnings of new algorithms for a new generation of computers and the underlying infrastructure to rapidly prototype and build further algorithms as necessary.

Data co-processing for extreme scale analysis level II ASC milestone (4745)

Rogers, David R.; Moreland, Kenneth D.; Oldfield, Ron A.; Fabian, Nathan D.

Exascale supercomputing will embody many revolutionary changes in the hardware and software of high-performance computing. A particularly pressing issue is gaining insight into the science behind the exascale computations. Power and I/O speed constraints will fundamentally change current visualization and analysis workflows. A traditional post-processing workflow involves storing simulation results to disk and later retrieving them for visualization and data analysis. However, at exascale, scientists and analysts will need a range of options for moving data to persistent storage, as the current offline or post-processing pipelines will not be able to capture the data necessary for data analysis of these extreme scale simulations. This Milestone explores two alternate workflows, characterized as in situ and in transit, and compares them. We find each to have its own merits and faults, and we provide information to help pick the best option for a particular use.
