Our architecture is based on the Eclipse framework. The Eclipse IDE was originally developed at IBM as a Java development environment. The core of Eclipse (itself written in Java) was later extracted and became the basis of a general application framework for building modular applications. The Eclipse Platform, built on the OSGi component framework, provides a complete set of primitives for managing the lifecycles and interactions of a system of separate but complementary components. The Eclipse Rich Client Platform (RCP) implements a graphical interface on top of that framework. Applications built using the Eclipse framework and the RCP are portable across many platforms and include both graphical desktop applications and headless server applications. Our set of integrated applications contains both graphical and non-graphical instances, as will be discussed.
The fundamental unit of composition in Eclipse is the OSGi plugin. Each plugin is a separately loadable software unit. A minimal plugin can contain nothing but declarative data stored in a manifest file, but most plugins contain Java code, Java or native libraries, images, scripts, and other data. A plugin can have no user interface, or it can have a graphical user interface (GUI) that appears within the IDE “workbench.” It can also contribute menu items or other additions to customize the GUIs offered by other plugins. As used in SAW, each tool to be integrated is implemented as one or more plugins. Typically this small group of plugins, which we loosely refer to as a component, will have compile-time dependencies among themselves but will not depend directly on plugins supporting other tools.
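For illustration, a minimal plugin manifest might look like the following; the bundle name is invented, but the headers are the standard OSGi and Eclipse manifest headers:

    Manifest-Version: 1.0
    Bundle-ManifestVersion: 2
    Bundle-Name: Example Tool Wrapper
    Bundle-SymbolicName: com.example.saw.mytool;singleton:=true
    Bundle-Version: 1.0.0
    Require-Bundle: org.eclipse.core.runtime
    Bundle-RequiredExecutionEnvironment: JavaSE-1.8

The singleton directive is required for any plugin that contributes extensions or extension points, as discussed next.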
A plugin can depend on and interact explicitly with other plugins, but ideally plugins interact more abstractly through the use of Eclipse extension points. An extension point is a declarative (XML) description of a service that one plugin can offer to another. A plugin satisfies an extension point by implementing an extension. It is possible to query the Eclipse framework for all the extensions that implement a particular extension point. In this way, a consumer of a service (as defined by an extension point) can locate its providers without compile-time knowledge of any plugins that provide that service. This means that multiple applications can be composed by selecting from a set of components, according to user needs. Each component can discover at runtime the providers or consumers of any services it involves. Component developers concentrate on delivering specific services and need not worry about how those services will be combined.
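The following sketch illustrates this discovery step. The registry calls are the standard Eclipse extension registry API; the extension point identifier and the IResponseExtractor service interface are hypothetical names invented for illustration.

    import org.eclipse.core.runtime.CoreException;
    import org.eclipse.core.runtime.IConfigurationElement;
    import org.eclipse.core.runtime.Platform;
    import java.util.ArrayList;
    import java.util.List;

    public class ExtractorLocator {
        /** Hypothetical service interface, defined alongside the extension point. */
        public interface IResponseExtractor {
            double extract(String outputFile);
        }

        // Hypothetical extension point ID declared by the consuming plugin.
        private static final String POINT_ID = "com.example.saw.responseExtractors";

        /** Finds every provider of the service without compile-time knowledge of them. */
        public static List<IResponseExtractor> findExtractors() {
            List<IResponseExtractor> result = new ArrayList<>();
            IConfigurationElement[] elements =
                    Platform.getExtensionRegistry().getConfigurationElementsFor(POINT_ID);
            for (IConfigurationElement element : elements) {
                try {
                    // Instantiate the class named in the extension's "class" attribute;
                    // the providing plugin is loaded lazily at this point.
                    Object impl = element.createExecutableExtension("class");
                    if (impl instanceof IResponseExtractor) {
                        result.add((IResponseExtractor) impl);
                    }
                } catch (CoreException e) {
                    // One misconfigured extension should not break the other providers.
                }
            }
            return result;
        }
    }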
This architecture based on plugins and extension points supports a technique we call a bridge plugin, which lets individual teams develop separate but interacting plugins while operating with a great deal of autonomy and a minimum of cross-team communication. A bridge plugin is a plugin A that implements an extension point declared in a plugin B in terms of the capabilities of a third plugin C. In this way plugins B and C (which generally interface to different tools and are created by different development teams) can interoperate even though they have no interdependencies and the teams implementing them may in fact be completely unaware of one another.
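A minimal sketch of the pattern, with all package and class names invented for illustration: plugin B declares the extension point and its service interface, plugin C wraps some post-processing capability, and the bridge plugin A contains only the glue between them.

    // Lives in bridge plugin A, which depends on both B and C;
    // B and C have no dependency on each other.
    import com.example.pluginb.IResponseExtractor; // interface declared by plugin B (hypothetical)
    import com.example.pluginc.PostProcessor;      // capability offered by plugin C (hypothetical)

    /**
     * Registered against plugin B's extension point in A's plugin.xml and
     * implemented entirely in terms of plugin C.
     */
    public class PostProcessorBridge implements IResponseExtractor {
        @Override
        public double extract(String outputFile) {
            return new PostProcessor().readScalar(outputFile);
        }
    }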
Our set of components includes wrappers for the Sierra suite of analysis codes as well as a few other analysis codes, for the CUBIT meshing and geometry library (Owen, 2009), for the Dakota optimization library (Adams et al., 2014), and for other tools. It also includes components for workflow editing and execution, general model building, parameter management, response extraction, data management, requirements management, remote computational job submission and monitoring, remote visualization, and more.
Declarative Component Definition
Many of our wrapper components are quite detailed and contain significant information about an external tool. While users want a GUI that exposes all the capabilities of the wrapped tool, hard-coding the necessary information (often in the form of input file syntax) would be both prohibitively expensive and very fragile as the codes evolve and syntax is added or removed. As a result, whenever possible we use a data-driven or declarative approach in which the syntax of a code is described in a data file and graphical interfaces are generated at runtime from that description. Besides the reduction in implementation effort, ancillary benefits include a consistent appearance across generated GUI panels and easier, more complete testing and validation.
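As an illustrative sketch of runtime generation: the ParameterSpec class below is a stand-in for whatever one entry of a syntax definition file provides, while the widget calls are standard SWT, the toolkit underlying the Eclipse workbench.

    import org.eclipse.swt.SWT;
    import org.eclipse.swt.layout.GridData;
    import org.eclipse.swt.layout.GridLayout;
    import org.eclipse.swt.widgets.Composite;
    import org.eclipse.swt.widgets.Label;
    import org.eclipse.swt.widgets.Text;

    public class GeneratedPanel {
        /** Minimal stand-in for one entry parsed from a syntax description file. */
        public static class ParameterSpec {
            public final String keyword;
            public final String defaultValue;
            public ParameterSpec(String keyword, String defaultValue) {
                this.keyword = keyword;
                this.defaultValue = defaultValue;
            }
        }

        /** Builds one labeled text field per declared input parameter. */
        public static void build(Composite parent, java.util.List<ParameterSpec> specs) {
            parent.setLayout(new GridLayout(2, false));
            for (ParameterSpec spec : specs) {
                new Label(parent, SWT.NONE).setText(spec.keyword);
                Text field = new Text(parent, SWT.BORDER);
                field.setText(spec.defaultValue);
                field.setLayoutData(new GridData(SWT.FILL, SWT.CENTER, true, false));
            }
        }
    }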
In some cases the developers of the wrapped code create the syntax definition file or can maintain it themselves; in other cases that task falls to the integration team. Even then, a declarative approach is usually preferable because changes in the wrapped code are easier to track and test.
We do not mandate a format for declarative description of input syntax, but rather try to accommodate formats created by various other development teams. Most tools use some form of XML. SAW includes several code generators that are driven by these various formats.
It is often the case that specific features of a code suggest a unique graphical interface presentation that cannot be specified within a simple general-purpose description format. As a way of preserving both the quality of the user presentation and the simplicity of the format, we provide escape mechanisms for these cases, in which hand-coded GUI panels can replace generated ones for specific syntax features. We have found this to be an effective compromise between generated and hand-coded GUIs.
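One way to realize such an escape mechanism, sketched here with a hypothetical registry and building on the GeneratedPanel example above, is to consult the hand-coded panels first and fall back to the generator:

    import org.eclipse.swt.widgets.Composite;

    /** Sketch of generated-vs-hand-coded dispatch; the registry is hypothetical. */
    public class PanelFactory {
        // Maps a syntax feature's keyword to a hand-coded panel builder, if registered.
        private final java.util.Map<String, java.util.function.Consumer<Composite>> handCoded =
                new java.util.HashMap<>();

        public void register(String keyword, java.util.function.Consumer<Composite> builder) {
            handCoded.put(keyword, builder);
        }

        public void buildPanel(Composite parent, GeneratedPanel.ParameterSpec spec) {
            java.util.function.Consumer<Composite> custom = handCoded.get(spec.keyword);
            if (custom != null) {
                custom.accept(parent); // hand-coded panel overrides the generated one
            } else {
                GeneratedPanel.build(parent, java.util.Collections.singletonList(spec));
            }
        }
    }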
Data Management
One central component of the Sandia Analysis Workbench is the Workbench Server, which stores data in a commercial product data management (PDM) system. Our data management component, which interfaces to the Workbench Server, provides versioned storage, maintains relationships between artifacts, and is the basis for data security. Our data management model stores everything in a project. A project has an associated team whose members can access its contents; access can be granted to individuals outside the team using access control lists integrated with our Laboratory’s directory system.
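A hypothetical sketch of that model in interface form (all names are invented; the actual PDM schema is not shown here):

    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    /** An artifact stored under version control within a project. */
    interface Artifact {
        String path();
        int latestVersion();
        List<Artifact> relatedArtifacts(); // maintained relationships between artifacts
    }

    /** The unit of storage and access control in the data management model. */
    interface Project {
        Set<String> team();                           // members with default access
        Map<String, AccessLevel> accessControlList(); // grants to users outside the team
        List<Artifact> artifacts();
    }

    enum AccessLevel { READ, WRITE }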
Our data management system was originally developed for interactive use, but over time it has been interfaced to other tools in the Workbench, including job submission and requirements management; in both cases the interface to data management adds useful capabilities to the other tools. All parts of a model and all related resources can be stored in context in our data management system.
The SAW team is collaborating with teams from other National Laboratories to create the successor to SAW’s Scientific Data Management (SDM) system. The Next-Gen SDM will be a portable, open-source solution. We have been developing requirements and a specification for the new system; the results will eventually be published.