Publications

ForTrilinos tutorial

Rouson, Damian R.; Slattengren, Nicole S.

The objectives are: (1) to increase the adoption of Trilinos throughout DOE research communities that principally write Fortran, e.g., climate and combustion researchers; and (2) to maintain the OOP philosophy of the Trilinos project while using idioms that feel natural to Fortran programmers.

An overview of the Morfeus project

Rouson, Damian R.

The objectives of this project are to: (1) move scientific programmers to higher-level, platform-agnostic yet scalable abstractions; (2) demonstrate general OOD patterns and distill new domain-specific patterns from multiphysics applications in Fortran; and (3) construct an open-source framework that encourages the use of the demonstrated patterns. Some conclusions are: (1) calculus illuminates a path toward highly asynchronous computing that blurs the task/data parallel distinction; (2) Fortran 2003 appears to have the expressiveness to support the general GoF design patterns in multiphysics applications; and (3) several domain-specific and language-specific patterns emerge along the way.
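
A minimal sketch, using illustrative names rather than Morfeus source, of the Fortran 2003 machinery that makes such GoF-style patterns expressible: an abstract derived type with a deferred type-bound procedure, and a generic time integrator written solely against that abstraction.

    module dynamical_system_module
      implicit none
      private
      public :: dynamical_system, advance

      ! Abstract physics: any system that can evaluate its time derivative.
      type, abstract :: dynamical_system
      contains
        procedure(rhs_interface), deferred :: d_dt
      end type

      abstract interface
        pure function rhs_interface(this, state) result(dState_dt)
          import :: dynamical_system
          class(dynamical_system), intent(in) :: this
          real, intent(in) :: state(:)
          real :: dState_dt(size(state))
        end function
      end interface

    contains

      ! Generic explicit Euler step: written once, reused by every concrete system.
      subroutine advance(system, state, dt)
        class(dynamical_system), intent(in) :: system
        real, intent(inout) :: state(:)
        real, intent(in) :: dt
        state = state + dt*system%d_dt(state)
      end subroutine

    end module dynamical_system_module

    module lorenz_module
      use dynamical_system_module, only : dynamical_system
      implicit none
      private
      public :: lorenz

      ! One concrete system; new physics extends the abstraction the same way.
      type, extends(dynamical_system) :: lorenz
        real :: sigma = 10., rho = 28., beta = 8./3.
      contains
        procedure :: d_dt => lorenz_d_dt
      end type

    contains

      pure function lorenz_d_dt(this, state) result(dState_dt)
        class(lorenz), intent(in) :: this
        real, intent(in) :: state(:)
        real :: dState_dt(size(state))
        dState_dt = [ this%sigma*(state(2) - state(1)),           &
                      state(1)*(this%rho - state(3)) - state(2),  &
                      state(1)*state(2) - this%beta*state(3) ]
      end function

    end module lorenz_module

Adding new physics means adding a new extension of dynamical_system; the integrator never changes.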

Object construction and destruction design patterns in Fortran 2003

Procedia Computer Science

Rouson, Damian R.; Xia, Jim; Xu, Xiaofeng

This paper presents object-oriented design patterns in the context of object construction and destruction. The examples leverage the newly supported object-oriented features of Fortran 2003. We describe from the client perspective two patterns articulated by Gamma et al. [1]: ABSTRACT FACTORY and FACTORY METHOD. We also describe from the implementation perspective one new pattern: the OBJECT pattern. We apply the Gamma et al. patterns to solve a partial differential equation, and we discuss applying the new pattern to a quantum vortex dynamics code. Finally, we address consequences and describe the use of the patterns in two open-source software projects: ForTrilinos and Morfeus.
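
As a hedged illustration of the first of these patterns (with invented type names, not the paper's code), the following condensed sketch shows the FACTORY METHOD shape in Fortran 2003: an abstract factory defers a creation function that returns a polymorphic, allocatable product, so client code never names a concrete field type.

    module field_factory_module
      implicit none
      private
      public :: field, field_factory, periodic_field, periodic_field_factory

      type, abstract :: field            ! abstract product
      end type

      type, abstract :: field_factory    ! abstract creator
      contains
        procedure(create_interface), deferred :: create
      end type

      abstract interface
        function create_interface(this) result(new_field)
          import :: field, field_factory
          class(field_factory), intent(in) :: this
          class(field), allocatable :: new_field
        end function
      end interface

      ! One concrete product/creator pair; others follow the same shape.
      type, extends(field) :: periodic_field
        real, allocatable :: nodal_values(:)
      end type

      type, extends(field_factory) :: periodic_field_factory
      contains
        procedure :: create => create_periodic_field
      end type

    contains

      function create_periodic_field(this) result(new_field)
        class(periodic_field_factory), intent(in) :: this
        class(field), allocatable :: new_field
        allocate(periodic_field :: new_field)   ! typed allocation (F2003)
      end function

    end module field_factory_module

A client holding a class(field_factory) argument can create fields without depending on any concrete type, which is the decoupling the paper relies on from the client perspective.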

Complexity in scalable computing

Proposed for publication in Scientific Programming.

Rouson, Damian R.

The rich history of scalable computing research owes much to a rapid rise in computing platform scale in terms of size and speed. As platforms evolve, so must algorithms and the software expressions of those algorithms. Unbridled growth in scale inevitably leads to complexity. This special issue grapples with two facets of this complexity: scalable execution and scalable development. The former results from efficient programming of novel hardware with increasing numbers of processing units (e.g., cores, processors, threads or processes). The latter results from efficient development of robust, flexible software with increasing numbers of programming units (e.g., procedures, classes, components or developers). The progression in the above two parenthetical lists goes from the lowest levels of abstraction (hardware) to the highest (people). This issue's theme encompasses this entire spectrum.

The lead author of each article resides in the Scalable Computing Research and Development Department at Sandia National Laboratories in Livermore, CA. Their co-authors hail from other parts of Sandia, other national laboratories and academia. Their research sponsors include several programs within the Department of Energy's Office of Advanced Scientific Computing Research and its National Nuclear Security Administration, along with Sandia's Laboratory Directed Research and Development program and the Office of Naval Research. The breadth of interests of these authors and their customers is reflected in the breadth of applications this issue covers.

This article demonstrates how to obtain scalable execution on the increasingly dominant high-performance computing platform: a Linux cluster with multicore chips. The authors describe how deep memory hierarchies necessitate reducing communication overhead by using threads to exploit shared register and cache memory. On a matrix-matrix multiplication problem, they achieve up to 96% parallel efficiency with a three-part strategy: intra-node multithreading, non-blocking inter-node message passing, and a dedicated communications thread to facilitate concurrent communications and computations. On a quantum chemistry problem, they spawn multiple computation threads and communication threads on each node and use one-sided communications between nodes to minimize wait times. They reduce software complexity by evolving a multi-threaded factory pattern in C++ from a working, message-passing program in C.
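
The codes described above are in C and C++; purely as a hedged, Fortran-flavored sketch of the overlap idea (non-blocking message passing proceeding concurrently with computation), the fragment below posts a halo exchange, works on interior data while messages are in flight, and only then completes the boundary update. All names and sizes are invented, and the dedicated communications thread and one-sided communication are not sketched.

    program overlap_sketch
      use mpi
      implicit none
      integer, parameter :: n = 1024
      double precision :: interior(n), halo_in(2), halo_out(2)
      integer :: ierr, rank, nproc, left, right, requests(4)
      integer :: statuses(MPI_STATUS_SIZE, 4)

      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
      call MPI_Comm_size(MPI_COMM_WORLD, nproc, ierr)
      left  = mod(rank - 1 + nproc, nproc)
      right = mod(rank + 1, nproc)

      interior = real(rank, kind(1.d0))
      halo_out = [interior(1), interior(n)]

      ! 1. Post non-blocking halo exchange with both neighbors.
      call MPI_Irecv(halo_in(1), 1, MPI_DOUBLE_PRECISION, left,  0, MPI_COMM_WORLD, requests(1), ierr)
      call MPI_Irecv(halo_in(2), 1, MPI_DOUBLE_PRECISION, right, 1, MPI_COMM_WORLD, requests(2), ierr)
      call MPI_Isend(halo_out(1), 1, MPI_DOUBLE_PRECISION, left,  1, MPI_COMM_WORLD, requests(3), ierr)
      call MPI_Isend(halo_out(2), 1, MPI_DOUBLE_PRECISION, right, 0, MPI_COMM_WORLD, requests(4), ierr)

      ! 2. Overlap: interior work proceeds while messages are in flight.
      interior(2:n-1) = 0.5d0*(interior(1:n-2) + interior(3:n))

      ! 3. Complete communication, then update the boundary points.
      call MPI_Waitall(4, requests, statuses, ierr)
      interior(1) = 0.5d0*(halo_in(1) + interior(2))
      interior(n) = 0.5d0*(interior(n-1) + halo_in(2))

      call MPI_Finalize(ierr)
    end program overlap_sketch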

Analysis-based arguments for abstract data type calculus

Rouson, Damian R.

Increasing demands on the complexity of scientific models, coupled with increasing demands for their scalability, are placing programming models on an equal footing in significance with the numerical methods they implement. A recurring theme across several major scientific software development projects involves defining abstract data types (ADTs) that closely mimic mathematical abstractions such as scalar, vector, and tensor fields. In languages that support user-defined operators and/or overloading of intrinsic operators, coupling ADTs with a set of algebraic and/or integro-differential operators results in an ADT calculus. This talk will analyze ADT calculus using three tool sets: object-oriented design metrics, computational complexity theory, and information theory. It will be demonstrated that ADT calculus leads to highly cohesive, loosely coupled abstractions with code-size-invariant data dependencies and minimal information entropy. The talk will also discuss how these results relate to software flexibility and robustness.
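
To make the ADT-calculus idea concrete, here is a minimal sketch (illustrative only, not material from the talk): a field type whose overloaded algebraic and differential operators let solver code mirror the mathematics on a uniform periodic grid.

    module field_module
      implicit none
      private
      public :: field, ddx, laplacian, operator(-), operator(*)

      type :: field
        real, allocatable :: values(:)  ! nodal values on a uniform periodic grid
        real :: dx = 1.0                ! grid spacing
      end type

      interface operator(-)
        module procedure subtract_fields
      end interface

      interface operator(*)
        module procedure multiply_fields, scale_field
      end interface

    contains

      ! d/dx via second-order centered differences with periodic wrap-around
      function ddx(u) result(dudx)
        type(field), intent(in) :: u
        type(field) :: dudx
        dudx%dx = u%dx
        dudx%values = (cshift(u%values, 1) - cshift(u%values, -1))/(2.0*u%dx)
      end function

      function laplacian(u) result(d2udx2)
        type(field), intent(in) :: u
        type(field) :: d2udx2
        d2udx2%dx = u%dx
        d2udx2%values = (cshift(u%values, 1) - 2.0*u%values + cshift(u%values, -1))/u%dx**2
      end function

      function subtract_fields(lhs, rhs) result(difference)
        type(field), intent(in) :: lhs, rhs
        type(field) :: difference
        difference%dx = lhs%dx
        difference%values = lhs%values - rhs%values
      end function

      function multiply_fields(lhs, rhs) result(product_)
        type(field), intent(in) :: lhs, rhs
        type(field) :: product_
        product_%dx = lhs%dx
        product_%values = lhs%values*rhs%values
      end function

      function scale_field(alpha, u) result(scaled)
        real, intent(in) :: alpha
        type(field), intent(in) :: u
        type(field) :: scaled
        scaled%dx = u%dx
        scaled%values = alpha*u%values
      end function

    end module field_module

With such a module, a Burgers-type right-hand side for a field u and scalar nu reads du_dt = nu*laplacian(u) - u*ddx(u), which is the kind of notational transparency, cohesion, and loose coupling the abstract attributes to an ADT calculus.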

Object-oriented design patterns for multiphysics modeling in Fortran 2003

Rouson, Damian R.; Adalsteinsson, Helgi A.

The objectives of this presentation are to: catalog object-oriented software design patterns for multiphysics modeling; demonstrate them in Fortran 2003 and C++; and compare the capabilities of the two languages. The conclusions are: the presented patterns integrate multiple abstractions, allowing much of the numerics and physics to be determined at compile time or run time; a negligible number of Fortran lines suffices to emulate the required C++ features; and C++ requires considerable effort (or considerable reliance on libraries to relieve that effort) to emulate the required Fortran 2003 features.
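
One small, hedged illustration of that asymmetry (an example of ours, not taken from the presentation): Fortran's elemental procedures and whole-array arithmetic come with the language, whereas C++ typically emulates them with templates or expression-template libraries.

    module kinetics_module
      implicit none
    contains
      ! Defined once for scalars, usable on arrays of any rank and shape.
      elemental function arrhenius(a, e_over_r, temperature) result(rate)
        real, intent(in) :: a, e_over_r, temperature
        real :: rate
        rate = a*exp(-e_over_r/temperature)
      end function
    end module kinetics_module

    program demo
      use kinetics_module, only : arrhenius
      implicit none
      real :: t(4) = [300., 400., 500., 600.]
      print *, arrhenius(1.0e6, 1.0e4, t)   ! elemental call broadcasts over t
    end program demo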
